AES San Francisco 2012
Live Sound Track Event Details

Friday, October 26, 9:00 am — 10:00 am (Room 124)

TC Meeting: Acoustics and Sound Reinforcement

Technical Committee Meeting on Acoustics and Sound Reinforcement


Friday, October 26, 9:00 am — 11:00 am (Room 121)

Paper Session: P1 - Amplifiers and Equipment

Jayant Datta, THX - San Francisco, CA, USA; Syracuse University - Syracuse, NY, USA

P1-1 A Low-Voltage Low-Power Output Stage for Class-G Headphone Amplifiers
Alexandre Huffenus, EASii IC - Grenoble, France
This paper proposes a new headphone amplifier circuit architecture whose output stage can be powered from very low supply rails, from ±1.8 V down to ±0.2 V. When used inside a Class-G amplifier, with the switched-mode power supply powering the output stage, power consumption can be significantly reduced. For a typical listening level of 2x100 µW, the increase in power consumption over idle is only 0.7 mW, instead of 2.5 mW to 3 mW for existing amplifiers. In battery-powered devices like smartphones or portable music players, this can increase battery life by more than 15% during audio playback. Theory of operation, electrical performance, and a comparison with the current state of the art are detailed.
Convention Paper 8684 (Purchase now)
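As a rough illustration of the abstract's battery-life claim (this is not from the paper; the amplifier figures reuse the abstract's numbers, while the baseline playback drain is an invented assumption):

```python
# Sanity-check sketch of the battery-life claim in the abstract above.
# Assumption (not from the paper): the rest of the player draws a
# hypothetical 12 mW baseline during audio playback.
baseline_mw = 12.0                 # assumed non-amplifier drain (invented)
proposed_mw = baseline_mw + 0.7    # proposed amplifier adds 0.7 mW over idle
existing_mw = baseline_mw + 2.75   # existing amplifiers add ~2.5-3 mW

# Battery life scales inversely with average power drain.
improvement = existing_mw / proposed_mw - 1.0
print(f"battery life gain: {improvement:.0%}")  # ~16% under these assumptions
```

Under this assumed baseline the gain lands just above the abstract's ">15%" figure; a heavier baseline drain would shrink it, a lighter one would grow it.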

P1-2 Switching/Linear Hybrid Audio Power Amplifiers for Domestic Applications, Part 2: The Class-B+D Amplifier
Harry Dymond, University of Bristol - Bristol, UK; Phil Mellor, University of Bristol - Bristol, UK
The analysis and design of a series switching/linear hybrid audio power amplifier rated at 100 W into 8 Ω are presented. A high-fidelity linear stage controls the output, while the floating mid-point of the power supply for this linear stage is driven by a switching stage. This keeps the voltage across the linear stage output transistors low, enhancing efficiency. Analysis shows that the frequency responses of the linear and switching stages must be tightly matched to avoid saturation of the linear stage output transistors. The switching stage employs separate DC and AC feedback loops in order to minimize the adverse effects of the floating-supply reservoir capacitors, through which the switching stage output current must flow.
Convention Paper 8685 (Purchase now)

P1-3 Investigating the Benefit of Silicon Carbide for a Class D Power Stage
Verena Grifone Fuchs, University of Siegen - Siegen, Germany; CAMCO GmbH - Wenden, Germany; Carsten Wegner, University of Siegen - Siegen, Germany; CAMCO GmbH - Wenden, Germany; Sebastian Neuser, University of Siegen - Siegen, Germany; Dietmar Ehrhardt, University of Siegen - Siegen, Germany
This paper analyzes how silicon carbide transistors reduce the switching errors and losses associated with the power stage. A silicon carbide power stage and a conventional power stage with super-junction devices are compared in terms of switching behavior. Experimental results for switching transitions, delay times, and harmonic distortion, as well as a theoretical evaluation, are presented. By remedying the imperfections of the power stage, silicon carbide transistors show high potential for Class D audio amplification.
Convention Paper 8686 (Purchase now)

P1-4 Efficiency Optimization of Class G Amplifiers: Impact of the Input Signals
Patrice Russo, Lyon Institute of Nanotechnology - Lyon, France; Gael Pillonnet, University of Lyon - Lyon, France; CPE dept; Nacer Abouchi, Lyon Institute of Nanotechnology - Lyon, France; Sophie Taupin, STMicroelectronics, Inc. - Grenoble, France; Frederic Goutti, STMicroelectronics, Inc. - Grenoble, France
Class G amplifiers are an effective solution to increase audio efficiency for headphone applications, but realistic operating conditions have to be taken into account to predict and optimize power efficiency. In fact, power supply tracking, which is a key factor for high efficiency, is poorly optimized with the classical design method because the stimulus used is very different from a real audio signal. Here, a methodology is proposed to find class G nominal operating conditions. By using relevant stimuli and nominal output power, simulation and test of the class G amplifier come closer to real conditions. Moreover, a novel simulator is used to quickly evaluate efficiency with long-duration stimuli, i.e., ten seconds instead of a few milliseconds. This allows longer transient simulation for accurate efficiency and audio quality evaluation by averaging the class G behavior. Based on this simulator, the paper indicates the limitations of the well-established test setup: real efficiencies vary by up to ±50% from those predicted by classical methods. Finally, the study underlines the need to use real audio signals to optimize the supply voltage tracking of class G amplifiers in order to achieve maximal efficiency in nominal operation.
Convention Paper 8687 (Purchase now)


Friday, October 26, 9:00 am — 10:30 am (Room 133)


Tutorial: T1 - Noise on the Brain—Hearing Damage on the Other Side: Part II

Poppy Crum, Dolby Laboratories - San Francisco, CA, USA

Did you know that drinking a glass of orange juice every day may actually protect your hearing? Most discussions of hearing damage focus on what happens to the cochlea and inner ear. While this understanding is crucial to predicting and avoiding trauma that can lead to hearing loss, both acoustic and chemical stimuli can also have significant effects on higher brain areas. In some cases, thresholds and audiograms can look completely normal but listeners may have great difficulty hearing a conversation in a noisy environment. This session will explore the latest research regarding the effects of acoustic and chemical trauma, and how this damage manifests throughout the auditory pathway as changes in hearing sensitivity, cognition, and the experience of tinnitus. We will also consider recent research in chemically preserving hearing and combating these conditions with supplements as common as Vitamin C!


Friday, October 26, 9:00 am — 10:30 am (Room 122)

Paper Session: P2 - Networked Audio

Ellen Juhlin, Meyer Sound - Berkeley, CA, USA; AVnu Alliance

P2-1 Audio Latency Masking in Music Telepresence Using Artificial Reverberation
Ren Gang, University of Rochester - Rochester, NY, USA; Samarth Shivaswamy, University of Rochester - Rochester, NY, USA; Stephen Roessner, University of Rochester - Rochester, NY, USA; Akshay Rao, University of Rochester - Rochester, NY, USA; Dave Headlam, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
Network latency poses significant challenges in music telepresence systems designed to enable multiple musicians at different locations to perform together in real time. Since each musician hears a delayed version of the performance from the other musicians, it is difficult to maintain synchronization, and there is a natural tendency for the musicians to slow their tempo while awaiting a response from their fellow performers. We asked whether introducing artificial reverberation can help musicians better tolerate latency, conducting experiments with performers in which the degree of latency was controllable and artificial reverberation could be added or omitted. Both objective and subjective evaluations of ensemble performances were conducted to assess the perceptual responses at different experimental settings.
Convention Paper 8688 (Purchase now)

P2-2 Service Discovery Using Open Sound Control
Andrew Eales, Wellington Institute of Technology - Wellington, New Zealand; Rhodes University - Grahamstown, South Africa; Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
The Open Sound Control (OSC) protocol does not have service discovery capabilities. The approach to adding service discovery to OSC proposed in this paper uses the OSC address space to represent services within the context of a logical device model. This model allows services to be represented in a context-sensitive manner by relating the parameters representing services to the logical organization of a device. Service discovery is implemented using standard OSC messages and requires that the OSC address space be designed to support these messages. This paper illustrates how these enhancements to OSC allow a device to advertise its services; controller applications can then explore the device's address space to discover services and retrieve those required by the application.
Convention Paper 8689 (Purchase now)
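To make the address-space idea concrete, here is a minimal sketch of how services might be represented as OSC-style addresses and explored by a controller. The addresses, service names, and query function below are invented for illustration and are not the paper's actual scheme:

```python
# Illustrative sketch only: one possible way an OSC address space could
# expose services within a logical device model. All names are invented.
device_address_space = {
    "/device/info/name": "ExampleMixer",
    "/device/input/1/gain": 0.0,
    "/device/input/1/mute": False,
    "/device/services/metering": True,   # advertised service flags
    "/device/services/snapshots": True,
}

def discover(prefix):
    """Stand-in for a controller exploring a device's address space:
    return every address that falls under the given prefix."""
    return sorted(addr for addr in device_address_space if addr.startswith(prefix))

print(discover("/device/services/"))
# ['/device/services/metering', '/device/services/snapshots']
```

In a real system the exploration would be carried out with OSC query messages over the network rather than a local dictionary lookup, but the dispatch-by-address-prefix idea is the same.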

P2-3 Flexilink: A Unified Low Latency Network Architecture for Multichannel Live Audio
Yonghao Wang, Birmingham City University - Birmingham, UK; John Grant, Nine Tiles Networks Ltd. - Cambridge, UK; Jeremy Foss, Birmingham City University - Birmingham, UK
The networking of live audio for professional applications typically uses layer 2-based solutions such as AES50 and MADI utilizing fixed time slots similar to Time Division Multiplexing (TDM). However, these solutions are not effective for best effort traffic where data traffic utilizes available bandwidth and is consequently subject to variations in QoS. There are audio networking methods such as AES47, which is based on asynchronous transfer mode (ATM), but ATM equipment is rarely available. Audio can also be sent over Internet Protocol (IP), but the size of the packet headers and the difficulty of keeping latency within acceptable limits make it unsuitable for many applications. In this paper we propose a new unified low latency network architecture that supports both time deterministic and best effort traffic toward full bandwidth utilization with high performance routing/switching. For live audio, this network architecture allows low latency as well as the flexibility to support multiplexing multiple channels with different sampling rates and word lengths.
Convention Paper 8690 (Purchase now)


Friday, October 26, 10:30 am — 12:00 pm (Room 120)

Live Sound Seminar: LS1 - Power for Live Events

Kenneth Fause, Auerbach Pollock Friedlander - San Francisco, CA, USA
Steve Bush, Meyer Sound Labs, Inc. - Berkeley, CA, USA
Marc Kellom, Crown Audio, Inc. - Granger, IN, USA
Randall Venerable, CEO, Generators Unlimited, Inc.

A discussion of power for live events in which the panelists will focus on practical considerations:

• Power for modern amplifiers – how much is really needed?
• Power factor of loads – real power, reactive power, and why this matters.
• Non-linear loads and harmonics.
• High-quality audio systems co-existing with theatrical lighting, rigging, video, catering, and cooling – electromagnetic compatibility in the real world.
• Safety and regulatory issues.


Friday, October 26, 2:00 pm — 3:30 pm (Room 120)


Live Sound Seminar: LS2 - Live Sound Engineering: The Juxtaposition of Art and Science

Chuck Knowledge, Chucknology - San Francisco, CA, USA

In this presentation we examine the different disciplines required for modern live event production. The technical foundations of audio engineering are no longer enough to deliver the experiences demanded by today's concert goers. This session will discuss practical engineering challenges with consideration for the subjective nature of art and the desire of performing artists to push the technological envelope. Focal points will include:

• Transplanting studio production to the live arena.
• Computer-based solutions and infrastructure requirements.
• The symbiosis with lighting and video.
• New technologies for interactivity and audience engagement.
• New career paths made possible by innovation in these fields.

Attendees can expect insight into the delivery of recent high-profile live events, the relevant enabling technologies, and how to develop their own skill set to remain at the cutting edge.


Friday, October 26, 2:00 pm — 4:00 pm (Room 133)

Workshop: W3 - What Every Sound Engineer Should Know about the Voice

Eddy B. Brixen, EBB-consult - Smorum, Denmark
Henrik Kjelin, Complete Vocal Institute - Denmark
Cathrine Sadolin, Complete Vocal Institute - Denmark

The purpose of this workshop is to teach sound engineers how to listen to the voice before they even think of microphone selection and knob-turning. The presentation and demonstrations are based on the "Complete Vocal Technique" (CVT), whose foundation is the classification of all human voice sounds into one of four vocal modes named Neutral, Curbing, Overdrive, and Edge. The classification is used by professional singers across all musical styles and has, over a period of 20 years, proved easy to grasp both in real-life situations and in auditive and visual tests (sound examples and laryngeal images/Laryngograph waveforms). These vocal modes are found in the speaking voice as well. Cathrine Sadolin, the developer of CVT, will involve the audience in this workshop while explaining and demonstrating how to work with the modes in practice to achieve any sound and to solve many different voice problems such as unintentional vocal breaks, too much or too little volume, hoarseness, and much more. The physical aspects of the voice will be explained, and laryngograph waveforms and analyses will be demonstrated by Henrik Kjelin. Eddy Brixen will demonstrate measurements for the detection of the vocal modes and explain essential parameters in the recording chain, especially the microphone, to ensure reliable and natural recordings.


Friday, October 26, 3:00 pm — 4:30 pm (Foyer)

Poster: P7 - Amplifiers, Transducers, and Equipment

P7-1 Evaluation of trr Distorting Effects Reduction in DCI-NPC Multilevel Power Amplifiers by Using SiC Diodes and MOSFET Technologies
Vicent Sala, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Tomas Resano, Jr., UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; MCIA Research Center; Jose Luis Romeral, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Jose Manuel Moreno, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain
In the last decade, power amplifier applications have used multilevel diode-clamped-inverter or neutral-point-clamped (DCI-NPC) topologies to achieve very low distortion at high power, and much research has been devoted to reducing the sources of distortion in DCI-NPC topologies. One of the most important, yet least studied, sources of distortion is the reverse recovery time (trr) of the clamp diodes and MOSFET parasitic diodes. Today, with the emergence of Silicon Carbide (SiC) technologies, these sources of distortion can be minimized. This paper presents a comparative study and evaluation of the distortion generated by different combinations of diodes and MOSFETs in Si and SiC technologies in a DCI-NPC multilevel power amplifier, with the aim of reducing the distortion generated by the non-idealities of the semiconductor devices.
Convention Paper 8720 (Purchase now)

P7-2 New Strategy to Minimize Dead-Time Distortion in DCI-NPC Power Amplifiers Using COE-Error Injection
Tomas Resano, Jr., UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; MCIA Research Center; Vicent Sala, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Jose Luis Romeral, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Jose Manuel Moreno, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain
The DCI-NPC topology has become one of the best options for optimizing energy efficiency in high-power, high-quality amplifiers. It can use an analog PWM modulator that is prone to generating distortion or error for two main reasons: Carrier Amplitude Error (CAE) and Carrier Offset Error (COE). Another main error and distortion source in the system is the dead time (td). Dead time is necessary to guarantee proper operation of the power amplifier stage, so the errors and distortions it originates are unavoidable. This work proposes injecting a negative COE to minimize the distorting effects of td. Simulation and experimental results validate this strategy.
Convention Paper 8721 (Purchase now)

P7-3 Further Testing and Newer Methods in Evaluating Amplifiers for Induced Phase and Frequency Modulation via Tones, Amplitude Modulated Signals, and Pulsed Waveforms
Ronald Quan, Ron Quan Designs - Cupertino, CA, USA
This paper presents further investigations from AES Convention Paper 8194, which studied induced FM distortions in audio amplifiers. Amplitude-modulated (AM) signals are used to investigate frequency shifts of the AM carrier signal with different modulation frequencies. A square-wave and sine-wave TIM test signal is used to evaluate FM distortions at the fundamental frequency and harmonics of the square wave. Newer amplifiers are tested for FM distortion with a large-level low-frequency signal inducing FM distortion on a small-level high-frequency signal. In particular, amplifiers with low and higher open-loop bandwidths are tested for differential phase and FM distortion as the frequency of the large-level signal is increased from 1 kHz to 2 kHz.
Convention Paper 8722 (Purchase now)

P7-4 Coupling Lumped and Boundary Element Methods Using Superposition
Joerg Panzer, R&D Team - Salgen, Germany
Both the lumped element and the boundary element methods are powerful tools for simulating electroacoustic systems, and each has its preferred domain of application within a system to be modeled. For example, the lumped element method is practical for electronics, simple mechanics, and internal acoustics. The boundary element method, on the other hand, shows its strength in acoustic-field calculations such as diffraction, reflection, and radiation impedance problems. Coupling both methods allows the total system to be investigated. This paper describes a method for fully coupling the rigid body mode of the lumped element method to the boundary element method with the help of self- and mutual radiation impedance components, using the superposition principle. The coupling approach thus features the convenient property of a high degree of independence between the two domains: for example, one can modify parameters, and even to some extent change the structure of the lumped-element network, without having to re-solve the boundary element system. The paper gives the mathematical derivation and a demonstration example that compares calculation results against measurement. In this example the electronics and mechanics of the three loudspeakers involved are modeled with the lumped element method; the waveguide, enclosure, and radiation are modeled with the boundary element method.
Convention Paper 8723 (Purchase now)

P7-5 Study of the Interaction between Radiating Systems in a Coaxial Loudspeaker
Alejandro Espi, Acústica Beyma - Valencia, Spain; William A. Cárdenas, Sr., University of Alicante - Alicante, Spain; Jose Martinez, Acustica Beyma S.L. - Moncada (Valencia), Spain; Jaime Ramis, University of Alicante - Alicante, Spain; Jesus Carbajo, University of Alicante - Alicante, Spain
This work explains the procedure followed to study the interaction between the mid- and high-frequency radiating systems of a coaxial loudspeaker. For this purpose a numerical finite element model was implemented. In order to fit the model, an experimental prototype was built and a set of experimental measurements, among them electrical impedance and pressure frequency response in an anechoic plane wave tube, were carried out. To take the displacement-dependent nonlinearities into account, a parametric analysis over different input voltages was performed, and the internal acoustic impedance was computed numerically in the frequency domain for specific phase plug geometries. By inverse-transforming to a time-domain differential equation scheme, a lumped element equivalent circuit was obtained to evaluate the mutual acoustic load effect present in this type of acoustically coupled system. Additionally, the crossover frequency range was analyzed using the near-field acoustic holography technique.
Convention Paper 8724 (Purchase now)

P7-6 Flexible Acoustic Transducer from Dielectric-Compound Elastomer Film
Takehiro Sugimoto, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Tokyo Institute of Technology - Midori-ku, Yokohama, Japan; Kazuho Ono, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Akio Ando, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Hiroyuki Okubo, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Kentaro Nakamura, Tokyo Institute of Technology - Midori-ku, Yokohama, Japan
To increase the sound pressure level of a flexible acoustic transducer from a dielectric elastomer film, this paper proposes compounding various kinds of dielectrics into a polyurethane elastomer, which is the base material of the transducer. The studied dielectric elastomer film utilizes a change in side length derived from the electrostriction for sound generation. The proposed method was conceived from the fact that the amount of dimensional change depends on the relative dielectric constant of the elastomer. Acoustical measurements demonstrated that the proposed method was effective because the sound pressure level increased by 6 dB at the maximum.
Convention Paper 8725 (Purchase now)

P7-7 A Digitally Driven Speaker System Using Direct Digital Spread Spectrum Technology to Reduce EMI Noise
Masayuki Yashiro, Hosei University - Koganei, Tokyo, Japan; Mitsuhiro Iwaide, Hosei University - Koganei, Tokyo, Japan; Akira Yasuda, Hosei University - Koganei, Tokyo, Japan; Michitaka Yoshino, Hosei University - Koganei, Tokyo, Japan; Kazuyki Yokota, Hosei University - Koganei, Tokyo, Japan; Yugo Moriyasu, Hosei University - Koganei, Tokyo, Japan; Kenji Sakuda, Hosei University - Koganei, Tokyo, Japan; Fumiaki Nakashima, Hosei University - Koganei, Tokyo, Japan
In this paper a novel digitally driven speaker for reducing electromagnetic interference, incorporating a spread spectrum clock generator, is proposed. The driving signal of a loudspeaker, which has a large spectrum at specific frequencies, interferes with nearby equipment because the driving signal emits electromagnetic waves. The proposed method switches between two clock frequencies according to a clock selection signal generated by a pseudo-noise circuit. The noise performance deterioration caused by the clock frequency switching can be reduced by the proposed modified delta-sigma modulator (DSM), which changes the coefficients of the DSM according to the width of the clock period. The proposed method can reduce out-of-band noise by 10 dB compared to the conventional method.
Convention Paper 8726 (Purchase now)

P7-8 Automatic Speaker Delay Adjustment System Using Wireless Audio Capability of ZigBee Networks
Jaeho Choi, Seoul National University - Seoul, Korea; Myoung woo Nam, Seoul National University - Seoul, Korea; Kyogu Lee, Seoul National University - Seoul, Korea
The IEEE 802.15.4 (ZigBee) standard defines a low data rate, low power consumption, low cost, flexible wireless networking protocol for automation and remote control applications. This paper applies these characteristics to a wireless speaker delay compensation system in a large venue (a hall of over 500 seats). Traditionally, delay adjustment has been done manually by sound engineers, but the proposed system automatically analyzes the delayed sound from the front speakers arriving at the rear speakers and applies the appropriate delay time to the rear speakers. This paper investigates the feasibility of adjusting wireless speaker delay over the above-mentioned ZigBee network. We present an implementation of ZigBee audio transmission and an LBS (Location-Based Service) application that allows speaker delay times to be calculated.
Convention Paper 8727 (Purchase now)
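The underlying delay calculation is simple acoustics, independent of the paper's ZigBee implementation: the rear speaker must be delayed by the extra acoustic path length divided by the speed of sound. The helper below is an illustrative sketch, not the authors' algorithm:

```python
# Minimal sketch (not the paper's method): time-align a rear fill speaker
# with the front array by delaying it to cover the extra acoustic path.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def rear_speaker_delay_ms(front_to_listener_m, rear_to_listener_m):
    """Delay to apply to the rear speaker so both arrivals coincide
    at the listener; never negative (the front arrival sets the reference)."""
    extra_path = front_to_listener_m - rear_to_listener_m
    return max(extra_path, 0.0) / SPEED_OF_SOUND * 1000.0

print(f"{rear_speaker_delay_ms(30.0, 10.0):.1f} ms")  # 20 m difference -> 58.3 ms
```

An automatic system like the one described would estimate the two path lengths (or the arrival-time difference directly) from measurements instead of taking them as arguments.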

P7-9 A Second-Order Soundfield Microphone with Improved Polar Pattern Shape
Eric M. Benjamin, Surround Research - Pacifica, CA, USA
The soundfield microphone is a compact tetrahedral array of four figure-of-eight microphones yielding four coincident virtual microphones: one omnidirectional and three orthogonal pressure gradient microphones. As described by Gerzon, above a limiting frequency approximated by fc = πc/r, the virtual microphones become progressively contaminated by higher-order spherical harmonics. To improve the high-frequency performance, either the array size must be substantially reduced or a new array geometry must be found. In the present work an array having nominally octahedral geometry is described. It samples the spherical harmonics in a natural way and yields horizontal virtual microphones up to second order with excellent horizontal polar patterns up to 20 kHz.
Convention Paper 8728 (Purchase now)
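Reading the quoted relation as fc = πc/r, with c the speed of sound and r the array radius (this interpretation of the abstract's symbols is an assumption, not stated in the abstract), the limiting frequency for a given array size can be evaluated directly:

```python
import math

# Sketch of the limiting-frequency relation quoted in the abstract,
# read as fc = pi * c / r. Treating c as the speed of sound and r as
# the array radius is an assumption made here for illustration.
SPEED_OF_SOUND = 343.0  # m/s

def limiting_frequency_hz(array_radius_m):
    return math.pi * SPEED_OF_SOUND / array_radius_m

# e.g., a hypothetical 2 cm array radius
print(f"{limiting_frequency_hz(0.02) / 1000:.1f} kHz")
```

The inverse dependence on r is the point of the abstract: halving the array size doubles the frequency below which the virtual microphones remain uncontaminated.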

P7-10 Period Deviation Tolerance Templates: A Novel Approach to Evaluation and Specification of Self-Synchronizing Audio Converters
Francis Legray, Dolphin Integration - Meylan, France; Thierry Heeb, Digimath - Sainte-Croix, Switzerland; SUPSI, ICIMSI - Manno, Switzerland; Sebastien Genevey, Dolphin Integration - Meylan, France; Hugo Kuo, Dolphin Integration - Meylan, France
Self-synchronizing converters represent an elegant and cost-effective solution for integrating audio functionality into an SoC (System-on-Chip), as they integrate both conversion and clock synchronization functionality. Audio performance of such converters is, however, very dependent on the jitter rejection capabilities of the synchronization system. A methodology based on two period deviation tolerance templates is described for evaluating such synchronization solutions prior to any silicon measurements. It is also a unique way of specifying the expected performance of a synchronization system in the presence of jitter on the audio interface. The proposed methodology is applied to a self-synchronizing audio converter, and its advantages are illustrated by both simulation and measurement results.
Convention Paper 8729 (Purchase now)

P7-11 Loudspeaker Localization Based on Audio Watermarking
Florian Kolbeck, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Giovanni Del Galdo, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Iwona Sobieraj, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Tobias Bliem, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Localizing the positions of loudspeakers can be useful in a variety of applications, above all the calibration of a home theater setup. For this aim, several existing approaches employ a microphone array and specifically designed signals to be played back by the loudspeakers, such as sine sweeps or maximum length sequences. While these systems achieve good localization accuracy, they are unsuitable for those applications in which the end-user should not be made aware that the localization is taking place. This contribution proposes a system that fulfills these requirements by employing an inaudible watermark to carry out the localization. The watermark is specifically designed to work in reverberant environments. Results from realistic simulations confirm the practicability of the proposed system.
Convention Paper 8730 (Purchase now)


Friday, October 26, 4:00 pm — 5:30 pm (Room 120)

Live Sound Seminar: LS3 - Practical Application of Audio Networking for Live Sound

Kevin Kimmel, Yamaha Commercial Audio - Fullerton, CA, USA
Steve Seable, Yamaha Commercial Audio - Fullerton, CA, USA
Steve Smoot, Yamaha Commercial Audio - Fullerton, CA, USA
Kieran Walsh, Audinate Pty. Ltd. - Ultimo, NSW, Australia

This panel will focus on the use of several audio networking technologies, including A-Net, Dante, EtherSound, CobraNet, Optocore, RockNet, and AVnu AVB, and their deployment in live sound applications. Panelists will be industry professionals who have experience working with the various network formats.


Friday, October 26, 8:00 pm — 9:00 pm (Off-Site 1)


Special Event: Organ Concert

Graham Blyth, Wantage, Oxfordshire, UK

This year's concert is being held at St. Mark's Lutheran Church. The organ is a 2-manual and pedal Taylor & Boody, with a stop-list chosen to represent the North German Organs of the Baroque period.

The program will be all Bach and will include the Dorian Toccata, BWV 538; the Six Schübler Chorales; selected chorale preludes from the Eighteen Chorales; the Vivaldi/Bach Concerto in A minor; and the Prelude & Fugue in C, BWV 545.

This is a FREE event and is first-come, first-served.


Saturday, October 27, 9:00 am — 11:00 am (Room 132)


Tutorial: T4 - Small Room Acoustics

Ben Kok, BEN KOK acoustic consulting - Uden, Netherlands

Acoustic basics of small rooms will be discussed. Specific issues related to the size of the room (room-modes) will be addressed. Absorption, reflection, diffraction, diffusion, and how to use it, as well as specific aspects regarding low frequency treatment will be discussed.

Although this will not be a studio design class, specifics and differences of recording rooms and control rooms will be identified, including considerations for loudspeaker and microphone placement.


Saturday, October 27, 9:00 am — 11:00 am (Room 133)

Workshop: W4 - What Does an Object Sound Like? Toward a Common Definition of a Spatial Audio Object

Frank Melchior, BBC R&D - Salford, UK
Jürgen Herre, International Audio Laboratories Erlangen - Erlangen, Germany; Fraunhofer IIS - Erlangen, Germany
Jean-Marc Jot, DTS, Inc. - Calabasas, CA, USA
Nicolas Tsingos, Dolby Labs - San Francisco, CA, USA
Matte Wagner, Red Storm Entertainment - Cary, NC, USA
Hagen Wierstorf, Technische Universität Berlin - Berlin, Germany

At the present time, several concepts for the storage of spatial audio data are under discussion in the research community. Besides distributing audio signals corresponding to a specific speaker layout, or encoding a spatial audio scene in orthogonal basis functions such as spherical harmonics, several solutions available on the market apply object-based formats to store and distribute spatial audio scenes. The workshop will cover the similarities and differences between the various concepts of audio objects. This comparison will include the production and reproduction of audio objects as well as their storage. The panelists will try to find a common definition of audio objects in order to enable an object-based exchange format in the future.


Saturday, October 27, 9:30 am — 12:00 pm (Tech Tours)

Technical Tour: TT3 - Fenix

The latest addition to San Rafael's nightlife scene, this new 150-seat, 8,600-square-foot performance venue combines innovative food and drinks with outstanding acoustics and a Meyer Sound loudspeaker system for a remarkable live music experience. Behind the scenes is a state-of-the-art studio production facility for recording and streaming live shows to the internet. Architect/acoustician John Storyk, whose Walters-Storyk Design Group designed the club, will lead part of the tour.

This event is limited to 44 tickets.

Technical Tours are made available on a first come, first served basis. Tickets can be purchased during normal registration hours at the convention center.

Price: Members $40/Nonmembers $50


Saturday, October 27, 10:15 am — 11:45 am (Room 120)

Live Sound Seminar: LS4 - Technical and Practical Considerations for Wireless Microphone System Designers and Users

Karl Winkler, Lectrosonics - Rio Rancho, NM, USA
Joe Ciaudelli, Sennheiser Electronic Corporation - Old Lyme, CT, USA
Gino Sigismondi, Shure, Inc. - Niles, IL, USA
Tom Turkington, CoachComm LLC - Auburn, AL, USA

Wireless microphone users and system designers encounter many of the same issues when setting up and using these systems, whether for house of worship, touring music, theater, corporate AV, or TV/video production. Channel counts from 1 to 24 are within the scope of this discussion. Topics will include RF Spectrum allocation, "safe haven" channels, TVBD database registration, frequency coordination, system design, transmitter and receiver antenna placement, and emerging wireless microphone technologies.


Saturday, October 27, 10:45 am — 12:15 pm (Room 131)

Broadcast and Streaming Media: B6 - Audio for Mobile Television

Brad Dick, Broadcast Engineering Magazine - Kansas City, MO, USA
Tim Carroll, Linear Acoustic Inc. - Lancaster, PA, USA
David Layer, National Association of Broadcasters - Washington, DC, USA
Robert Murch, Fox Television
Geir Skaaden, DTS, Inc.
Jim Starzynski, NBC Universal - New York, NY, USA
Dave Wilson, CEA - Arlington, VA, USA

Many TV stations recognize mobile DTV as a great new financial opportunity: by simply simulcasting their main channel, an entirely new revenue stream can be developed. But audio professionals caution that TV audio engineers should carefully consider the additional audio processing required to ensure proper loudness and intelligibility in a mobile device’s typically noisy listening environment. The proper solution may be more complex than just reducing dynamic range or adding high-pass filtering.

This panel of audio experts will provide in-depth guidance on steps that may be taken to maximize the performance of your mobile DTV channel.


Saturday, October 27, 2:00 pm — 3:30 pm (Room 123)


Network Audio: N2 - Open IP Protocols for Audio Networking

Kevin Gross, AVA Networks - Boulder, CO, USA

The networking and telecommunication industry has its own set of network protocols for carriage of audio and video over IP networks. These protocols have been widely deployed for telephony and teleconferencing applications, internet streaming, and cable television. This tutorial will acquaint attendees with these protocols and their capabilities and limitations. The relationship to AVB protocols will be discussed.

Specifically, attendees will learn about Internet protocol (IP), voice over IP (VoIP), IP television (IPTV), HTTP streaming, real-time transport protocol (RTP), real-time transport control protocol (RTCP), real-time streaming protocol (RTSP), session initiation protocol (SIP), session description protocol (SDP), Bonjour, session announcement protocol (SAP), differentiated services (DiffServ), and IEEE 1588 precision time protocol (PTP).

An overview of X192, the AES standards work on adapting these protocols to high-performance audio applications, will also be given.


Saturday, October 27, 2:00 pm — 3:30 pm (Room 132)

Live Sound Seminar: LS5 - Planning a Live Sound Education: Should I Study the Theory, or Practice the Skills?

Ted Leamy, Promedia-Ultrasound
Kevin W. Becka, Conservatory of Recording Arts and Sciences/Mix Magazine - Gilbert, AZ, USA
Michael Jackson, Independent Live Sound Engineer - Richmond, CA, USA
David Scheirman, Harman Professional - Northridge, CA, USA

Getting the correct education to prepare for a career in live audio is a challenging proposition. Which is better—formal education or hands-on experience? The choices being marketed for "education" today seem endless. Manufacturer trainings, school programs, product user certifications, continuing "education" credits, CTS technician exams, smart training, stupid training, "learn to be a genius" seminars …. Who you gonna call? Is it better to just go on the road and "earn while you learn"? Join your peers and some industry veterans for a lively investigation of the options you face when learning about live sound career choices. Hear what works, and what doesn’t. Come and discuss your own experiences … good, bad, or otherwise. This will be an open discussion with all participants. Educators, students, business owners, engineers, and anyone involved in live sound should come and be heard at this unique session.


Saturday, October 27, 4:00 pm — 5:00 pm (Proj St EX)


Project Studio Expo: PSE6 - Take Your Studio On Stage
Live Performance with Laptops, Looping Pedals, & Other Studio Techniques

Craig Anderton, Harmony Central / Electronic Musician - Santa Fe, NM, USA

For many musicians, as well as DJs and electronic acts, a 21st-century live performance requires much more than just a mixer and a bunch of amps. This workshop takes a practical look at how to use technology on stage without being overwhelmed by it and at ways to ensure a smooth performance, and it includes invaluable information on the “care and feeding” of laptops to ensure optimum performance—and uninterrupted performances. Other topics include using controllers for a more vibrant live performance, performing with Ableton Live and dedicated control surfaces, improvisation with looping pedals and DAW software, and the evolution of DJ controller/laptop combinations into tools for a musical, complex new art form.


Saturday, October 27, 4:00 pm — 6:00 pm (Room 120)

Live Sound Seminar: LS6 - Wireless Frequency Wrangling at Large Events

Bob Green, Audio-Technica U.S., Inc.
Dave Bellamy, Soundtronics
Steve Caldwell, Norwest Productions - Loftus, NSW, Australia
Henry Cohen, Production Radio - White Plains, NY USA
Chris Dodds, The P.A. People - Rhodes NSW, Australia
Pete Erskine, Freelancer - Brooklyn, NY, USA
Jason Eskew, Professional Wireless Systems - Florida, USA
Larry Estrin, Clear-Com - Alameda, CA, USA; Audio-Technica - Stow, OH, USA

This session will cover the overall approach to RF coordination, preparation, and setup for a large event. Topics that will be discussed will include microphones, in-ear monitoring, and production communications with specific focus on antenna systems, cabling, RF environment isolation, monitoring, verifying operation, and procedures.

The members of the panel will bring to the table their experience (national and international) in events as diverse as Olympic Games Ceremonies, Super Bowls, Presidential Debates, Grammy Awards, and Eurovision contests.


Sunday, October 28, 9:00 am — 10:30 am (Room 123)

Network Audio: N4 - AVnu – The Unified AV Network: Overview and Panel Discussion

Rob Silfvast, Avid - Mountain View, CA, USA
Ellen Juhlin, Meyer Sound - Berkeley, CA, USA; AVnu Alliance
Denis Labrecque, Analog Devices - San Jose, CA, USA
Lee Minich, Lab X Technologies - Rochester, NY, USA
Bill Murphy, Extreme Networks
Michael Johas Teener, Broadcom - Santa Cruz, CA, USA

This session will provide an overview of the AVnu Alliance, a consortium of audio and video product makers and core technology companies committed to delivering an interoperable open standard for audio/video networked connectivity built upon IEEE Audio Video Bridging standards. AVnu offers a logo-testing program that allows products to become certified for interoperability, much like the Wi-Fi Alliance provides for the IEEE 802.11 family of standards. Representatives from several different member companies will speak in this panel discussion and provide insights about AVB technology and participation in the AVnu Alliance.


Sunday, October 28, 10:45 am — 11:45 am (Room 122)


Historical: H3 - Lee de Forest: The Man Who Invented the Amplifier

Mike Adams, San Jose State University - San Jose, CA, USA

After Lee de Forest received his PhD in physics and electricity from Yale University in 1899, he spent the next 30 years turning the 19th-century science he had learned into the popular audio media of the 20th century. First he added sound to Marconi’s wireless telegraph, creating a radiotelephone system. Next, he invented the triode by adding a control grid to Fleming’s two-element diode tube, creating the three-element vacuum tube used as an audio amplifier and as an oscillator for radio wave generation. Using his tube and building on the earlier work of Ruhmer and Bell, he created a variable-density sound-on-film process, patented it, and began working with fellow inventor Theodore Case. To promote and demonstrate his process he made hundreds of short sound films, found theaters to show them, and issued publicity to gain audiences for his invention. While de Forest did not profit from sound-on-film, it was his earlier invention of the three-element vacuum tube, which allowed amplification of audio through loudspeakers for radio and the movies, that finally helped create their large public audiences.


Sunday, October 28, 11:00 am — 12:30 pm (Room 130)

Live Sound Seminar: LS7 - The Women of Professional Concert Sound

Terri Winston, Women's Audio Mission - San Francisco, CA, USA
Claudia Engelhart, FOH engineer for Bill Frisell, Herbie Hancock, Wayne Shorter
Deanne Franklin, FOH engineer for Tom Waits, David Byrne, Pink
Karrie Keyes, Monitor engineer for Pearl Jam, Red Hot Chili Peppers, Sonic Youth
Jeri Palumbo, Live Television production engineer, Super Bowl, The Oscars, Tonight Show w/Jay Leno
Michelle Sabolchick Pettinato, FOH engineer for Gwen Stefani, Jewel, Melissa Etheridge

This all-star panel of live sound engineers averages over 250 days a year mixing the biggest name acts in the business. Drawing from their experience running sound in arenas and large venues all over the world, these women will share their tips and tricks, from using EQ and learning the problematic frequencies of instruments to choosing the best outboard gear, and the systems typically used. This panel will explore what a day on a major world tour looks like, how to adjust to the acoustics of different venues, the difference between the positions of FOH and monitors, and how to successfully manage a life of constant touring.


Sunday, October 28, 11:00 am — 1:00 pm (Room 132)


Tutorial: T8 - An Overview of Audio System Grounding and Interfacing

William E. Whitlock, Jensen Transformers, Inc. - Chatsworth, CA, USA; Whitlock Consulting - Oxnard, CA, USA

Equipment makers like to pretend the problems don’t exist, but this tutorial replaces hype and myth with insight and knowledge, revealing the true causes of system noise and ground loops. Unbalanced interfaces are exquisitely vulnerable to noise due to an intrinsic problem. Although balanced interfaces are theoretically noise-free, they’re widely misunderstood by equipment designers, which often results in inadequate noise rejection in real-world systems. Because of a widespread design error, some equipment has a built-in noise problem. Simple, no-test-equipment, troubleshooting methods can pinpoint the location and cause of system noise. Ground isolators in the signal path solve the fundamental noise coupling problems. Also discussed are unbalanced to balanced connections, RF interference, and power line treatments. Some widely used “cures” are both illegal and deadly.


Sunday, October 28, 2:00 pm — 4:00 pm (Room 132)


Tutorial: T9 - Large Room Acoustics

Diemer de Vries, RWTH Aachen University - Aachen, Germany; TU Delft - Delft, Netherlands

In this tutorial, the traditional and modern ways to describe the acoustical properties of "large" rooms—rooms whose dimensions are large in comparison to the average wavelength of the relevant frequencies of the speech or music to be (re-)produced—will be discussed, covering theoretical models, measurement techniques, and the link between objective data and human perception. Is it the reverberation time, the impulse response, or is there more to take into account to arrive at a good assessment?


Sunday, October 28, 2:30 pm — 4:30 pm (Room 120)

Live Sound Seminar: LS8 - Tuning a Loudspeaker Installation

Jamie Anderson, Rational Acoustics - Putnam, CT, USA
David Gunness, Fulcrum Acoustic - Sutton, MA, USA
Deward Timothy, Poll Sound - Salt Lake Cty, UT, USA

Loudspeaker systems are installed to achieve functional and aesthetic goals, and the act of tuning (aligning) those systems is the process of achieving those aims. While often equated with simply the adjustment of a system’s drive EQ/DSP, loudspeaker system alignment truly encompasses the sum of the decisions (or non-decisions) made throughout the design, installation, drive-adjustment, and usage processes. This session gathers a panel of audio professionals with extensive experience in sound system alignment across a diverse variety of system types and applications to discuss their processes, priorities, and the critical elements that make their alignment goals achievable (or not). Given finite, and often extremely limited, resources (equipment, time, money, labor, space, access, authority), this session asks its panelists what is necessary to successfully tune a loudspeaker installation.


Sunday, October 28, 3:00 pm — 4:30 pm (Room 123)

Network Audio: N7 - Audio Network Device Connection and Control

Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
Jeffrey Alan Berryman, Bosch Communications - Flesherton, ON, Canada
Andreas Hildebrand, ALC NetworX - Munich, Germany
Jeff Koftinoff, Meyer Sound Canada - Vernon, BC, Canada
Kieran Walsh, Audinate Pty. Ltd. - Ultimo, NSW, Australia

In this session a number of industry experts will describe and demonstrate how they have enabled the discovery of audio devices on local area networks, their subsequent connection management, and also control over their various parameters. The workshop will start with a panel discussion that introduces issues related to streaming audio, such as bandwidth management and synchronization, as well as protocols that enable connection management and control. The panelists will have demonstrations of their particular audio network solutions. They will describe these solutions as part of the panel discussion, and will provide closer demonstrations following the panel discussion.


Sunday, October 28, 4:30 pm — 6:00 pm (Room 120)


Live Sound Seminar: LS9 - Acoustics for Small Live Sound Venues—Creating (& Fine Tuning) the Consummate Performing/Listening Environment

John Storyk, Walters-Storyk Design Group - Highland, NY, USA

John Storyk has designed acoustics for a number of live performance venues, ranging from the Jazz at Lincoln Center complex to the successful NYC club Le Poisson Rouge and the Fenix, a brand-new venue in San Rafael that is on the Tech Tour schedule. John will give an illuminating presentation on improving acoustics in existing performance venues and on designing acoustics for new venues so that potential absorption, reflection, and other sound-related issues are addressed prior to construction. He also just completed the acoustics for a new NYC club called 54 Below (below the Studio 54 theater), for which some very innovative acoustic solutions were developed.


Monday, October 29, 9:00 am — 10:00 am (Room 130)


Product Design: PD10 - Ethernet Standard Audio

Stephen Lampen, Belden - San Francisco, CA, USA

Ethernet has been around since 1973, and with the rise of twisted-pair cabling many companies have tried to make Ethernet work for multichannel audio. The problem is that their solutions were proprietary and not always compatible between manufacturers. This was the impetus behind IEEE 802.1BA AVB, an extension of the Ethernet standard that adds many features for audio and video applications. This presentation will look at AVB switches, how they differ from standard switches, and what is in the new standard.


Monday, October 29, 9:00 am — 10:30 am (Room 120)

Live Sound Seminar: LS10 - Live Sound for Corporate Events: It's Business Time

Michael (Bink) Knowles, Freelance Engineer - Oakland, CA, USA
Steve Ratcliff, Freelance Engineer
Scott Urie, Pinnacle Consulting Group

Corporate events demand audio perfection, even when the video screens and lighting plot take precedence over loudspeaker placement. Signal flow and mixing can be very complex, with distant parties and panel discussions creating feedback challenges. Mixing, routing, and arraying strategies will be discussed using example cases.


Monday, October 29, 10:15 am — 12:15 pm (Room 130)


Product Design: PD11 - Rub & Buzz and Other Irregular Loudspeaker Distortion

Wolfgang Klippel, Klippel GmbH - Dresden, Germany

Loudspeaker defects caused by manufacturing, aging, overload, or climate generate a special kind of irregular distortion, commonly known as rub & buzz, that is highly audible and intolerable to the human ear. Contrary to the regular loudspeaker distortion defined in the design process, irregular distortion is hardly predictable and is generated by an independent process triggered by the input signal. Traditional distortion measurements such as THD fail to reliably detect such defects. The tutorial discusses the most important defect classes, new measurement techniques, audibility, and the impact on perceived sound quality.


Monday, October 29, 10:45 am — 12:15 pm (Room 131)

Tutorial: T12 - Sound System Intelligibility

Ben Kok, BEN KOK acoustic consulting - Uden, Netherlands
Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK

The tutorial will discuss the background to speech intelligibility and its measurement, how room acoustics can potentially affect intelligibility, and what measures can be taken to optimize the intelligibility of a sound system. Practical examples of real-world problems and solutions will be given, based on the wide experience of the two presenters.


Monday, October 29, 11:00 am — 12:30 pm (Room 120)

Live Sound Seminar: LS11 - Audio DSP in Unreal-Time, Real-Time, and Live Settings

Robert Bristow-Johnson, audioImagination - Burlington, VT, USA
Kevin Gross, AVA Networks - Boulder, CO, USA

In audio DSP we generally worry about two problem areas: (1) the algorithm, i.e., what we're trying to accomplish with the sound and the mathematics for doing it; and (2) housekeeping, i.e., the "guzzintas," the "guzzoutas," and other overhead. The audio processing (or synthesis) setting, on the other hand, might be divided into three classes: (1) non-real-time processing of sound files; (2) real-time processing of a stream of samples; and (3) live processing of audio, each class more restrictive than the one before. We'll get a handle on defining what is real-time and what is not, and what is live and what is not. What are the essential differences? We'll discuss how the setting affects how the algorithm and the housekeeping might be done, and we'll look into some common techniques and less common tricks that can help non-real-time algorithms act in a real-time context and get *parts* of a non-live real-time algorithm to work in a live setting.
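The distinction between the first two classes can be sketched in a few lines of code. This is not material from the session itself, just a minimal illustration under my own assumptions (function names and the simple gain "algorithm" are hypothetical): offline processing sees the whole file at once, while stream processing must handle audio in fixed-size blocks, each finished before the next arrives.

```python
import math

def process_offline(samples, gain):
    """Non-real-time: the entire signal is available at once."""
    return [s * gain for s in samples]

def process_stream(samples, gain, block_size=64):
    """Real-time-style: audio arrives in fixed-size blocks; the work for
    each block must complete within one block period."""
    out = []
    for start in range(0, len(samples), block_size):
        block = samples[start:start + block_size]
        out.extend(s * gain for s in block)  # per-block processing
    return out

# Both paths compute the same result; only the scheduling constraint differs.
signal = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(256)]
assert process_offline(signal, 0.5) == process_stream(signal, 0.5)
```

For a memoryless gain the outputs are identical; the interesting cases the session targets are algorithms whose per-block cost or lookahead does not fit the block period, which is where the "tricks" come in.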


Monday, October 29, 2:00 pm — 4:00 pm (Room 120)

Live Sound Seminar: LS12 - The Art of Sound for Live Jazz Performances

Lee Brenkman
Mitch Grant, Sound Engineer for jazz events in the San Diego area
Nick Malgieri, Mixed the past several Monterey Jazz Festivals

A discussion of the sonic presentation of America's native musical art form in settings ranging from the smallest cafés and night clubs to the largest outdoor festivals. In particular the panelists will focus on how sound reinforcement for this musical form can differ from other genres.


Return to Live Sound Track Events

EXHIBITION HOURS October 27th 10am – 6pm October 28th 10am – 6pm October 29th 10am – 4pm
REGISTRATION DESK October 25th 3pm – 7pm October 26th 8am – 6pm October 27th 8am – 6pm October 28th 8am – 6pm October 29th 8am – 4pm
TECHNICAL PROGRAM October 26th 9am – 7pm October 27th 9am – 7pm October 28th 9am – 7pm October 29th 9am – 5pm
AES - Audio Engineering Society