AES New York 2013
Live Sound Track Event Details

Wednesday, October 16, 5:00 pm — 7:00 pm (Room 1E10)

Workshop: W21 - Lies, Damn Lies, and Audio Gear Specs

Ethan Winer, RealTraps - New Milford, CT, USA
Scott Dorsey, Williamsburg, VA, USA
David Moran, Boston Audio Society - Wayland, MA, USA
Mike Rivers, Gypsy Studio - Falls Church, VA, USA

The fidelity of audio devices is easily measured, yet vendors and magazine reviewers often omit important details. For example, a loudspeaker review will state the size of the woofer but not the low-frequency cut-off. Or the cut-off frequency is given, but without stating how many dB down it is or the rate at which the response rolls off below that frequency. Or it will state the distortion of the power amps in a powered monitor but not the distortion of the speakers themselves, which of course is what really matters. This workshop therefore defines a list of standards that manufacturers and reviewers should follow when describing the fidelity of audio products. It will also explain why measurements are a better way to assess fidelity than listening alone.

Excerpts from this workshop are available on YouTube.


Thursday, October 17, 9:00 am — 11:00 am (Room 1E12)

Live Sound Seminar: LS1 - AC Power and Grounding

Bruce C. Olson, Olson Sound Design - Brooklyn Park, MN, USA; Ahnert Feistel Media Group - Berlin, Germany
Bill Whitlock, Jensen Transformers, Inc. - Chatsworth, CA, USA; Whitlock Consulting - Oxnard, CA, USA

There is a lot of misinformation about what is needed to supply AC power for events, and some of that advice is life-threatening. This panel will discuss how to provide AC power properly and safely, without causing noise problems. The session will cover power for small to large systems, from a couple of boxes on sticks up to multiple stages in ballrooms, road houses, and event centers, as well as large-scale installed systems, including multiple transformers and company switches, service types, generator sets, and single- and three-phase 240/120 and 208/120 V services. Get the latest information on grounding and typical configurations from this panel of industry veterans.


Thursday, October 17, 9:00 am — 12:00 pm (Room 1E09)

Paper Session: P2 - Signal Processing—Part 1

Jaeyong Cho, Samsung Electronics DMC R&D Center - Suwon, Korea

P2-1 Linear Phase Implementation in Loudspeaker Systems: Measurements, Processing Methods, and Application Benefits
Rémi Vaucher, NEXO - Plailly, France
The aim of this paper is to present a new generation of EQ. It provides a way to ensure phase compatibility from 20 Hz to 20 kHz over a range of different speaker cabinets. The method is based on a mix of FIR and IIR filters. The use of FIR filters allows the phase to be tuned independently of the magnitude and yields an acoustically linear phase above 500 Hz. All targets used to compute the FIR coefficients are based on extensive measurements and subjective listening tests. A template has been set to normalize the crossover frequencies in the low range, enabling compatibility of every sub-bass with the main cabinets.
Convention Paper 8926
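The decoupling of phase from magnitude that the abstract attributes to FIR filters follows from a basic property: an FIR filter with symmetric taps has exactly linear phase, i.e., it behaves as a pure constant delay whatever its magnitude response. A minimal sketch in Python (the taps are arbitrary illustrative values, not from the paper):

```python
import cmath

def freq_response(h, w):
    """Frequency response of FIR taps h at normalized angular frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

# Hypothetical symmetric taps: the magnitude can be shaped by choosing the
# values, but symmetry alone forces the phase to be exactly -w*(N-1)/2,
# i.e., a pure constant delay of (N-1)/2 samples.
h = [0.1, 0.25, 0.3, 0.25, 0.1]      # symmetric around n = 2
delay = (len(h) - 1) / 2

for w in (0.1, 0.5, 1.0, 2.0):
    ratio = freq_response(h, w) / cmath.exp(-1j * w * delay)
    assert abs(ratio.imag) < 1e-12    # response is e^{-jw*delay} times a real number
```

IIR filters, by contrast, couple phase to magnitude, which is why the paper mixes the two filter types.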

P2-2 Applications of Inverse Filtering to the Optimization of Professional Loudspeaker Systems
Daniele Ponteggia, Studio Ponteggia - Terni (TR), Italy; Mario Di Cola, Audio Labs Systems - Casoli (CH), Italy
Applying FIR filter technology to implement inverse filtering in professional loudspeaker systems is nowadays easier and more affordable thanks to the latest developments in DSP technology and the availability of new DSP platforms dedicated to the end user. This paper presents an analysis, based on real-world examples, of a possible methodology for synthesizing an appropriate inverse filter, both to process a single driver in a multi-way system from a time-domain perspective and to process the output pass-band of a multi-way system for phase linearization. The analysis and discussion of results for some applications are illustrated through real-world tests and measurements.
Convention Paper 8927

P2-3 Live Event Performer Tracking for Digital Console Automation Using Industry-Standard Wireless Microphone Systems
Adam J. Hill, University of Derby - Derby, Derbyshire, UK; Kristian "Kit" Lane, University of Derby - Derby, UK; Adam P. Rosenthal, Gand Concert Sound - Elk Grove Village, IL, USA; Gary Gand, Gand Concert Sound - Elk Grove Village, IL, USA
The ever-increasing popularity of digital consoles for audio and lighting at live events provides a greatly expanded set of possibilities regarding automation. This research works toward a solution for performer tracking using wireless microphone signals that operates within the existing infrastructure at professional events. Principles of navigation technology such as received signal strength (RSS), time difference of arrival (TDOA), angle of arrival (AOA), and frequency difference of arrival (FDOA) are investigated to determine their suitability and practicality for use in such applications. Analysis of potential systems indicates that performer tracking is feasible over the width and depth of a stage using only two antennas with a suitable configuration, but limitations of current technology restrict the practicality of such a system.
Convention Paper 8928
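To make the TDOA idea concrete: with two antennas, one measured arrival-time difference constrains the transmitter to a hyperbola, so an extra constraint is needed to pin down a position. A toy sketch, assuming idealized timing, free-space RF propagation, and an invented fixed upstage depth (none of which are the paper's actual configuration):

```python
import math

C = 2.998e8  # wireless mic signals are RF, so they propagate at light speed (m/s)

def tdoa(src, ant_a, ant_b, c=C):
    """Time difference of arrival of a transmitter's signal at two antennas."""
    return (math.dist(src, ant_a) - math.dist(src, ant_b)) / c

ant_a, ant_b = (-5.0, 0.0), (5.0, 0.0)   # hypothetical stage-edge antennas
performer = (2.0, 4.0)                    # true position: x across, y upstage (m)
observed = tdoa(performer, ant_a, ant_b)

# One TDOA only constrains the transmitter to a hyperbola; fixing the
# upstage depth (an invented extra constraint) lets a 1-D grid search
# recover the across-stage position.
depth = 4.0
best_x = min((x / 100 for x in range(-500, 501)),
             key=lambda x: abs(tdoa((x, depth), ant_a, ant_b) - observed))
assert abs(best_x - 2.0) < 0.05
# Note the scale: the TDOA itself is on the order of ten nanoseconds,
# which hints at why the paper finds practical limitations.
```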

P2-4 Real-Time Simulation of a Family of Fractional-Order Low-Pass Filters
Thomas Hélie, IRCAM-CNRS UMR 9912-UPMC - Paris, France
This paper presents a family of low-pass filters, the attenuation of which can be continuously adjusted from 0 decibel per octave (filter is a unit gain) to -6 decibels per octave (standard one-pole filter). This continuum is produced through a filter of fractional-order between 0 (unit gain) and 1 (one-pole filter). Such a filter proves to be a (continuous) infinite combination of one-pole filters. Efficient approximations are proposed from which simulations in the time-domain are built.
Convention Paper 8929
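The paper's observation that such a filter is a continuous combination of one-pole filters can be sketched numerically. Assuming the classic integral representation (1+s)^(-a) = sin(a*pi)/pi * ∫ u^(-a)/(s+1+u) du for 0 < a < 1, a finite log-spaced sum of one-pole sections already approximates the fractional response well. This illustrates the principle only, not the paper's specific efficient approximation:

```python
import math

def frac_lowpass(s, alpha, n=600, u_min=1e-6, u_max=1e6):
    """Approximate H(s) = (1 + s)**(-alpha), 0 < alpha < 1, as a finite sum of
    one-pole filters 1/(s + 1 + u), by discretizing the integral representation
    (1+s)^(-a) = sin(a*pi)/pi * integral of u^(-a)/(s+1+u) du
    with the midpoint rule on a logarithmic grid."""
    step = math.log(u_max / u_min) / n
    total = 0j
    for k in range(n):
        u = u_min * math.exp((k + 0.5) * step)
        total += u ** (-alpha) / (s + 1 + u) * u * step   # du = u * d(ln u)
    return math.sin(alpha * math.pi) / math.pi * total

alpha = 0.5                    # halfway between unit gain and a one-pole filter
assert abs(frac_lowpass(0j, alpha) - 1) < 0.01            # unit gain at DC
exact = (1 + 10j) ** (-alpha)  # slope tends to -6*alpha = -3 dB/oct above cutoff
approx = frac_lowpass(10j, alpha)
assert abs(approx - exact) < 0.02 * abs(exact)
```

Varying alpha between 0 and 1 sweeps the high-frequency slope continuously between 0 and -6 dB per octave, which is the continuum the abstract describes.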

P2-5 A Computationally Efficient Behavioral Model of the Nonlinear Devices
Jaeyong Cho, Samsung Electronics DMC R&D Center - Suwon, Korea; Hanki Kim, Samsung Electronics DMC R&D Center - Suwon, Korea; Seungkwan Yu, Samsung Electronics DMC R&D Center - Suwon, Korea; Haekwang Park, Samsung Electronics DMC R&D Center - Suwon, Korea; Youngoo Yang, Sungkyunkwan University - Suwon, Korea
This paper presents a new computationally efficient behavioral model that reproduces the output signal of nonlinear devices in real-time systems. The proposed model is designed using a memory-gain structure and is verified for accuracy and computational complexity against other nonlinear models. The model parameters are extracted from a vacuum tube amplifier, Heathkit’s W-5M, using an exponentially swept sinusoidal signal. The experimental results show that the proposed model requires 27% of the computational load of the generalized Hammerstein model while maintaining similar modeling accuracy.
Convention Paper 8930
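For reference, the generalized Hammerstein model the authors benchmark against is a parallel bank of branches, each a static power nonlinearity followed by a linear FIR filter. A minimal sketch with hypothetical branch orders and taps (not the extracted W-5M parameters):

```python
def convolve(x, h):
    """Full linear convolution of two sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

def generalized_hammerstein(x, branches):
    """Parallel Hammerstein model: y = sum over branches (k, h_k) of h_k * x**k,
    i.e., a static power nonlinearity followed by a linear FIR in each branch."""
    length = len(x) + max(len(taps) for _, taps in branches) - 1
    y = [0.0] * length
    for order, taps in branches:
        for i, v in enumerate(convolve([s ** order for s in x], taps)):
            y[i] += v
    return y

# Hypothetical two-branch model: a linear path with memory plus a cubic path
branches = [(1, [1.0, 0.2]), (3, [0.5])]
y = generalized_hammerstein([0.2, -0.1], branches)
assert abs(y[0] - 0.204) < 1e-9   # 0.2 + 0.5 * 0.2**3
```

The cost of this structure grows with the number of branches and taps, which is the baseline the proposed memory-gain model undercuts.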

P2-6 High-Precision Score-Based Audio Indexing Using Hierarchical Dynamic Time Warping
Xiang Zhou, Bose Corporation - Framingham, MA, USA; Fangyu Ke, University of Rochester - Rochester, NY, USA; Cheng Shu, University of Rochester - Rochester, NY, USA; Gang Ren, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
We propose a novel audio signal processing algorithm for high-precision score-based audio indexing that accurately maps a music score to its corresponding audio. Specifically, we improve the time precision of existing score-audio alignment algorithms to find the accurate positions of audio onsets and offsets. We achieve higher time precision by (1) improving the resolution of alignment sequences and (2) admitting a hierarchy of spectrographic analysis results as audio alignment features. The performance of the proposed algorithm is verified by comparing its segmentation results with manually composed reference datasets. The algorithm achieves robust alignment and enhanced segmentation accuracy and is thus suitable for audio engineering applications such as automatic music production and human-media interaction.
Convention Paper 8931
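The hierarchical method builds on plain dynamic time warping, which finds the minimum-cost monotonic alignment between two sequences. A minimal cost-only DTW sketch (real score-audio systems align feature vectors, not scalars, and the paper refines the alignment level by level):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Minimal dynamic time warping: cost of the cheapest monotonic
    alignment between sequences a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1][j],
                                                     D[i][j - 1],
                                                     D[i - 1][j - 1])
    return D[n][m]

# A tempo-stretched rendition of the same contour aligns at zero cost;
# an unrelated sequence does not.
score = [1, 2, 3, 4, 3, 2, 1]
performance = [1, 1, 2, 3, 3, 4, 3, 2, 2, 1]
assert dtw(score, performance) == 0
assert dtw(score, [5, 5, 5]) > 0
```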


Thursday, October 17, 10:30 am — 12:30 pm (Room 1E14)

Workshop: W3 - Acoustic Enhancements Systems—Implementations

Ben Kok, SCENA acoustic consultants - Uden, The Netherlands
Steve Barbar, Lares Associates - Belmont, MA, USA
Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK
Thomas Sporer, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Ilmenau University of Technology - Ilmenau, Germany
Takayuki Watanabe, Yamaha Corp. - Hamamatsu, Shizuoka, Japan
Wieslaw Woszczyk, McGill University - Montreal, QC, Canada
Diemer de Vries, RWTH Aachen University - Aachen, Germany; TU Delft - Delft, Netherlands

Acoustic enhancement systems offer the possibility to change the acoustics of a venue by electronic means. How this is achieved varies by the working principle and philosophy of the system implemented. In this workshop various researchers, consultants, and suppliers active in the field of enhancement systems will discuss working principles and implementations.

This workshop is closely related to the tutorial on acoustic enhancement systems; those not yet familiar with the applications and working principles of these systems are encouraged to attend the tutorial before attending the workshop.

AES Technical Council This session is presented in association with the AES Technical Committee on Acoustics and Sound Reinforcement


Thursday, October 17, 2:30 pm — 4:30 pm (Room 1E12)

Live Sound Seminar: LS2 - Audio Network and Transport

Jim Risgin, On Stage Audio - Wood Dale, IL, USA
Mark Dittmar, Firehouse Productions
Phil Reynolds, System Tech, The Killers
Robert Silfvast, Avid - Mountain View, CA, USA

As audio and control over networks become more predominant in today’s live sound environment, managing the network becomes more challenging. This panel will discuss the problems, challenges, and solutions associated with sharing bandwidth between audio and control, as well as the unique challenges created by the many different manufacturers and protocols. Our discussion will rely heavily on questions and comments from the audience, as your experiences, pitfalls, and questions are central to these common challenges.


Thursday, October 17, 2:30 pm — 4:30 pm (Room 1E07)

Paper Session: P4 - Room Acoustics

Ben Kok, SCENA acoustic consultants - Uden, The Netherlands

P4-1 Investigating Auditory Room Size Perception with Autophonic Stimuli
Manuj Yadav, University of Sydney - Sydney, NSW, Australia; Densil A. Cabrera, University of Sydney - Sydney, NSW, Australia; Luis Miranda, University of Sydney - Sydney, NSW, Australia; William L. Martens, University of Sydney - Sydney, NSW, Australia; Doheon Lee, University of Sydney - Sydney, NSW, Australia; Ralph Collins, University of Sydney - Sydney, NSW, Australia
Although looking at a room gives a visual indicator of its “size,” auditory stimuli alone can also provide an appreciation of room size. This paper investigates such aurally perceived room size by allowing listeners to hear the sound of their own voice in real-time through two modes: natural conduction and auralization. The auralization process involved convolution of the talking-listener’s voice with an oral-binaural room impulse response (OBRIR; some from actual rooms, and others manipulated), which was output through head-worn ear-loudspeakers, and thus augmented natural conduction with simulated room reflections. This method allowed talking-listeners to rate room size without additional information about the rooms. The subjective ratings were analyzed against relevant physical acoustic measures derived from OBRIRs. The results indicate an overall strong effect of reverberation time on the room size judgments, expressed as a power function, although energy measures were also important in some cases.
Convention Paper 8934

P4-2 Digitally Steered Columns: Comparison of Different Products by Measurement and Simulation
Stefan Feistel, AFMG Technologies GmbH - Berlin, Germany; Anselm Goertz, Institut für Akustik und Audiotechnik (IFAA) - Herzogenrath, Germany
Digitally steered loudspeaker columns have become the predominant means to achieve satisfying speech intelligibility in acoustically challenging spaces. This work compares the performance of several commercially available array loudspeakers in a medium-size, reverberant church. Speech intelligibility as well as other acoustic quantities are compared on the basis of extensive measurements and computer simulations. The results show that formally different loudspeaker products provide very similar transmission quality. Also, measurement and modeling results match accurately within the uncertainty limits.
Convention Paper 8935

P4-3 A Concentric Compact Spherical Microphone and Loudspeaker Array for Acoustical Measurements
Luis Miranda, University of Sydney - Sydney, NSW, Australia; Densil A. Cabrera, University of Sydney - Sydney, NSW, Australia; Ken Stewart, University of Sydney - Sydney, NSW, Australia
Several commonly used descriptors of acoustical conditions in auditoria (ISO 3382-1) utilize omnidirectional transducers for their measurements, disregarding the directional properties of the source and the direction of arrival of reflections. This situation is further complicated when the source and the receiver are collocated as would be the case for the acoustical characterization of stages as experienced by musicians. A potential solution to this problem could be a concentric compact microphone and loudspeaker array, capable of synthesizing source and receiver spatial patterns. The construction of a concentric microphone and loudspeaker spherical array is presented in this paper. Such a transducer could be used to analyze the acoustic characteristics of stages for singers, while preserving the directional characteristics of the source, acquiring spatial information of reflections and preserving the spatial relationship between source and receiver. Finally, its theoretical response and optimal frequency range are explored.
Convention Paper 8936

P4-4 Adapting Loudspeaker Array Radiation to the Venue Using Numerical Optimization of FIR Filters
Stefan Feistel, AFMG Technologies GmbH - Berlin, Germany; Mario Sempf, AFMG Technologies GmbH - Berlin, Germany; Kilian Köhler, IBS Audio - Berlin, Germany; Holger Schmalle, AFMG Technologies GmbH - Berlin, Germany
Over the last two decades loudspeaker arrays have been employed increasingly for sound reinforcement. Their high output power and focusing ability facilitate extensive control capabilities as well as extraordinary performance. Based on acoustic simulation, numerical optimization of the array configuration, particularly of FIR filters, adds a new level of flexibility. Radiation characteristics can be established that are not available for conventionally tuned sound systems. It is shown that substantial improvements in sound field uniformity and output SPL can be achieved. Different real-world case studies are presented based on systematic measurements and simulations. Important practical implementation aspects are discussed such as the spatial resolution of driven sources, the number of FIR coefficients, and the quality of loudspeaker data.
Convention Paper 8937


Thursday, October 17, 4:30 pm — 6:00 pm (Room 1E13)

Network Audio: N1 - One Network to Rule Them All

Kevin Gross, AVA Networks - Boulder, CO, USA
Mattias Allevik, Video Corporation of America - New York, NY, USA
Dave Revel, Technical Multimedia Design, Inc. - Burbank, CA, USA

Networked audio distribution is now less frequently accomplished as a separate infrastructure. The promise of running audio on the same network as other facility services and applications is now coming to fruition. This workshop will discuss the motivation for combining services, the challenges in doing so, and requirements this approach puts on audio networking technologies.

AES Technical Council This session is presented in association with the AES Technical Committee on Network Audio Systems


Thursday, October 17, 4:30 pm — 6:30 pm (Room 1E12)

Live Sound Seminar: LS3 - Sound System Optimization

Bob McCarthy, Meyer Sound Labs
Jamie Anderson, Rational Acoustics - Putnam, CT, USA
Josh Evans, TC Group - Austin, TX, USA
John Sandrett
Tom Young
Geoff Zink

Sound system tuning is a multi-step process that begins long before the pink noise can be heard and generally continues until the keyboard is pried from our hands. What are the steps and procedures taken to ensure a successful tuning? How do we convince clients to give us the time and resources to do this vital work? How do we prioritize our limited resources when, as always, time is short? What can be done ahead of time?

Panel members are all very experienced with the process of system optimization, albeit from a variety of perspectives. Please join us and add your voice to this discussion of system optimization.


Thursday, October 17, 6:00 pm — 7:00 pm (Room 1E13)


Network Audio: N2 - A Primer on Fundamental Concepts of Media Networking

Landon Gentry, Audinate - Portland, OR, USA; Sydney, Australia

This session will cover:

• The OSI model and how data travels through network layers (a “networking stack”): Layers 1, 2, 3, and 4; cables, MAC addresses, IP addresses, and networking protocols.
• An overview of some networking standards and standards organizations, including the IEEE and the IETF.
• An introduction to IP data networking: it is how everything is already wired together.
• Some of the advantages and limitations of IP data networks with respect to real-time media.
• A brief discussion of IP networking standards and protocols that can be leveraged for media networking.


Friday, October 18, 9:00 am — 11:00 am (Room 1E12)

Live Sound Seminar: LS4 - Designing for Broadway Theater

Tom Morse, Morse Sound Design - New York, NY, USA
Peter Fitzgerald
Kai Harada, Harada Sound Design - New York, NY, USA
Abe Jacob, David H. Koch Theater - New York, NY, USA
Joshua Reid

We will focus first on a brief history of how sound on Broadway began, then on what makes Broadway sound design unique in the audio industry. That will include what is required of a sound designer on Broadway, working with the producer, director, and other designers, and what paperwork is needed in order to bid, build, and install the show. A Broadway theater is a four-wall rental, meaning the production is placed in an empty shell. All equipment is brought in and installed for what may be a week or could turn into 15 years. This requires a great deal of planning and forethought, because the show evolves and can change drastically from first rehearsal (when the paperwork is already overdue) to opening night.


Friday, October 18, 9:00 am — 11:30 am (Room 1E09)

Paper Session: P9 - Applications in Audio—Part I

Sungyoung Kim, Rochester Institute of Technology - Rochester, NY, USA

P9-1 Audio Device Representation, Control, and Monitoring Using SNMP
Andrew Eales, Wellington Institute of Technology - Wellington, New Zealand; Rhodes University - Grahamstown, South Africa; Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
The Simple Network Management Protocol (SNMP) is widely used to configure and monitor networked devices. The architecture of complex audio devices can be elegantly represented using SNMP tables. Carefully considered table indexing schemes support a logical device model that can be accessed using standard SNMP commands. This paper examines the use of SNMP tables to represent the architecture of audio devices. A representational scheme that uses table indexes to provide direct-access to context-sensitive SNMP data objects is presented. The monitoring of parameter values and the implementation of connection management using SNMP are also discussed.
Convention Paper 8962
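The indexing idea can be illustrated without a real SNMP stack: if a table's index encodes the device context, a GET on a fully specified index reaches a context-sensitive value directly, with no table walk. A hypothetical sketch (the index layout is invented for illustration, not taken from the paper):

```python
# Hypothetical index layout: a table row is keyed by
# (blockIndex, channelIndex, paramIndex), so a fully specified index acts
# like an SNMP GET that reaches a context-sensitive object directly.
DEVICE_MIB = {}

def register(block, channel, param, value):
    """Populate one conceptual table cell."""
    DEVICE_MIB[(block, channel, param)] = value

def snmp_get(block, channel, param):
    """Stand-in for an SNMP GET on a fully indexed table row."""
    return DEVICE_MIB[(block, channel, param)]

# Model a two-channel gain block: block 1, channels 1-2, param 1 = gain in dB
register(1, 1, 1, -6.0)
register(1, 2, 1, 0.0)
assert snmp_get(1, 2, 1) == 0.0
```

In real SNMP the same structure is declared with SMIv2 conceptual tables and INDEX clauses; the dictionary above only mimics the direct-access behavior.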

P9-2 IP Audio in the Real-World; Pitfalls and Practical Solutions Encountered and Implemented when Rolling Out the Redundant Streaming Approach to IP Audio
Kevin Campbell, WorldCast Systems/APT - Belfast, N. Ireland; Miami, FL, USA
This paper reviews the development of IP audio links for audio delivery and chiefly looks at the possibility of harnessing the flexibility and cost-effectiveness of the public internet for professional audio delivery. We first discuss the benefits of IP audio measured against traditional synchronous audio delivery, as well as the typical problems associated with delivering real-time broadcast audio across packetized networks, specifically in the context of unmanaged IP networks. The paper examines some techniques employed to overcome these issues, with an in-depth look at the redundant packet streaming approach.
Convention Paper 8963
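The core of the redundant streaming approach can be sketched in a few lines: send the same sequence-numbered packets over two paths and let the receiver keep the first copy of each sequence number, so the audio survives loss on either path. A simplified sketch (real systems must also handle reordering, jitter buffers, and timeouts):

```python
def merge_redundant(stream_a, stream_b):
    """Keep the first-seen copy of each sequence number from two redundant
    streams, so a packet lost on one path is recovered from the other."""
    seen = {}
    for seq, payload in list(stream_a) + list(stream_b):
        seen.setdefault(seq, payload)
    return [seen[seq] for seq in sorted(seen)]

# Path A drops packet 2, path B drops packet 4; the merge recovers all five.
path_a = [(1, "p1"), (3, "p3"), (4, "p4"), (5, "p5")]
path_b = [(1, "p1"), (2, "p2"), (3, "p3"), (5, "p5")]
assert merge_redundant(path_a, path_b) == ["p1", "p2", "p3", "p4", "p5"]
```

The trade-off is doubled bandwidth in exchange for tolerance of uncorrelated loss on the two paths, which suits unmanaged public-internet links.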

P9-3 Implementation of AES-64 Connection Management for Ethernet Audio/Video Bridging Devices
James Dibley, Rhodes University - Grahamstown, South Africa; Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
AES-64 is a standard for the discovery, enumeration, connection management, and control of multimedia network devices. This paper describes the implementation of an AES-64 protocol stack and control application on devices that support the IEEE Ethernet Audio/Video Bridging standards for streaming multimedia, enabling connection management of network audio streams.
Convention Paper 8964

P9-4 Simultaneous Acquisition of a Massive Number of Audio Channels through Optical Means
Gabriel Pablo Nava, NTT Communication Science Laboratories - Kanagawa, Japan; Yutaka Kamamoto, NTT Communication Science Laboratories - Kanagawa, Japan; Takashi G. Sato, NTT Communication Science Laboratories - Kanagawa, Japan; Yoshifumi Shiraki, NTT Communication Science Laboratories - Kanagawa, Japan; Noboru Harada, NTT Communication Science Laboratories - Atsugi-shi, Kanagawa-ken, Japan; Takehiro Moriya, NTT Communication Science Laboratories - Atsugi-shi, Kanagawa-ken, Japan
Sensing sound fields at multiple locations can become considerably time-consuming and expensive when large wired sensor arrays are involved. Although several techniques have been developed to reduce the number of necessary sensors, less work has been reported on efficient techniques to acquire the data from all the sensors. This paper introduces an optical system, based on the concept of visible light communication, that allows the simultaneous acquisition of audio signals from a massive number of channels via arrays of light-emitting diodes (LEDs) and a high-speed camera. Similar approaches use LEDs to express the sound pressure of steady-state fields as a scaled luminous intensity. The proposed sensor units, in contrast, optically transmit the actual digital audio signal sampled by the microphone in real time. Experiments illustrating two typical applications are presented: a remote acoustic imaging sensor array and spot beamforming based on compressive sampling theory. Implementation issues are also addressed to discuss the potential scalability of the system.
Convention Paper 8965

P9-5 Blind Microphone Analysis and Stable Tone Phase Analysis for Audio Tampering Detection
Luca Cuccovillo, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Sebastian Mann, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Patrick Aichroth, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Marco Tagliasacchi, Politecnico di Milano - Milan, Italy; Christian Dittmar, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany
In this paper we present an audio tampering detection method based on the combination of blind microphone analysis and phase analysis of stable tones, e.g., the electrical network frequency (ENF). The proposed algorithm uses phase analysis to detect segments that might have been tampered with. Afterwards, the segments are further analyzed using a feature vector able to discriminate among different microphone types. Using this combined approach, it is possible to achieve a significantly lower false-positive rate and higher reliability compared to standalone phase analysis.
Convention Paper 8966
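The stable-tone phase analysis can be illustrated with a synthetic example: measure the tone's phase frame by frame via a single DFT bin; an unedited recording keeps the frame phase continuous, while a splice introduces a jump. A toy sketch with an artificial 50 Hz tone and a deliberate phase discontinuity (real ENF analysis works on recorded hum and is considerably more involved):

```python
import cmath, math

def frame_phase(frame, freq, fs):
    """Phase of a known stable tone within one frame, via a single DFT bin."""
    acc = sum(s * cmath.exp(-2j * math.pi * freq * n / fs)
              for n, s in enumerate(frame))
    return cmath.phase(acc)

fs, f0, frame_len = 1000, 50.0, 100       # 0.1 s frames, 5 tone cycles each
# Synthetic "ENF" tone whose phase jumps by 1 rad at t = 0.5 s (a splice).
sig = [math.sin(2 * math.pi * f0 * n / fs + (0.0 if n < 500 else 1.0))
       for n in range(1000)]

phases = [frame_phase(sig[i:i + frame_len], f0, fs)
          for i in range(0, len(sig), frame_len)]
# An unedited tone keeps the frame phase constant; a splice shows up as a
# jump between consecutive frames.
jumps = [abs(phases[k + 1] - phases[k]) for k in range(len(phases) - 1)]
suspect = max(range(len(jumps)), key=lambda k: jumps[k])
assert suspect == 4        # the jump lies between frames 4 and 5 (t = 0.5 s)
assert jumps[suspect] > 0.5
```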


Friday, October 18, 11:00 am — 1:00 pm (Room 1E12)

Live Sound Seminar: LS5 - Dealing with Noise Pollution in Theaters

Tom Clark, Acme Design
Damian Doria
Scott Lehrer, Scott Lehrer Sound Design, Ltd. - New York, NY, USA
Tom Morse, Morse Sound Design - New York, NY, USA

As video projection, moving lights, and automated scenery have become common in Broadway productions, the noise they each create has become a problem for the sound design team to overcome without losing the subtle use of sound reinforcement and sound effects. This is a discussion among designers about ways to deal with this increasing problem.


Friday, October 18, 2:15 pm — 4:45 pm (Room 1E09)

Paper Session: P11 - Perception—Part 1

Jason Corey, University of Michigan - Ann Arbor, MI, USA

P11-1 On the Perceptual Advantage of Stereo Subwoofer Systems in Live Sound Reinforcement
Adam J. Hill, University of Derby - Derby, Derbyshire, UK; Malcolm O. J. Hawksford, University of Essex - Colchester, Essex, UK
Recent research into low-frequency sound-source localization confirms that the lowest localizable frequency is a function of room dimensions, source/listener location, and the reverberant characteristics of the space. Larger spaces therefore facilitate accurate low-frequency localization and should benefit from broadband multichannel live-sound reproduction compared to the current trend of deriving an auxiliary mono signal for the subwoofers. This study explores whether the monophonic approach is a significant limit to perceptual quality and whether stereo subwoofer systems can create a superior soundscape. The investigation combines binaural measurements and a series of listening tests to compare mono and stereo subwoofer systems when used within a typical left/right configuration.
Convention Paper 8970

P11-2 Auditory Adaptation to Loudspeakers and Listening Room Acoustics
Cleopatra Pike, University of Surrey - Guildford, Surrey, UK; Tim Brookes, University of Surrey - Guildford, Surrey, UK; Russell Mason, University of Surrey - Guildford, Surrey, UK
Timbral qualities of loudspeakers and rooms are often compared in listening tests involving short listening periods. Outside the laboratory, listening occurs over a longer time course. In a study by Olive et al. (1995) smaller timbral differences between loudspeakers and between rooms were reported when comparisons were made over shorter versus longer time periods. This is a form of timbral adaptation, a decrease in sensitivity to timbre over time. The current study confirms this adaptation and establishes that it is not due to response bias but may be due to timbral memory, specific mechanisms compensating for transmission channel acoustics, or attentional factors. Modifications to listening tests may be required where tests need to be representative of listening outside of the laboratory.
Convention Paper 8971

P11-3 Perception Testing: Spatial Acuity
P. Nigel Brown, Ex'pression College for Digital Arts - Emeryville, CA, USA
There is a lack of readily accessible data in the public domain detailing individual spatial aural acuity. Introducing new tests of aural perception, this document specifies testing methodologies and apparatus, with example test results and analyses. Tests are presented to measure the resolution of a subject's perception and their ability to localize a sound source. The basic tests are designed to measure minimum discernible change across a 180° horizontal soundfield. More complex tests are conducted over two or three axes for pantophonic or periphonic analysis. Example results are shown from tests including unilateral and bilateral hearing aid users and profoundly monaural subjects. Examples are provided of the applicability of the findings to sound art, healthcare, and other disciplines.
Convention Paper 8972

P11-4 Evaluation of Loudness Meters Using Parameterization of Fader Movements
Jon Allan, Luleå University of Technology - Piteå, Sweden; Jan Berg, Luleå University of Technology - Piteå, Sweden
The EBU recommendation R 128 regarding loudness normalization is now generally accepted and countries in Europe are adopting the new recommendation. There is now a need to know more about how and when to use the different meter modes, Momentary and Short term, proposed in R 128, as well as to understand how different implementations of R 128 in audio level meters affect the engineers’ actions. A method is tentatively proposed for evaluating the performance of audio level meters in live broadcasts. The method was used to evaluate different meter implementations, three of them conforming to the recommendation from EBU, R 128. In an experiment, engineers adjusted audio levels in a simulated live broadcast show and the resulting fader movements were recorded. The movements were parameterized into “Fader movement,” “Adjustment time,” “Overshoot,” etc. Results show that the proposed parameters produced significant differences caused by the meters and that the experience of the engineer operating the fader is a significant factor.
Convention Paper 8973
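The kind of parameterization described can be sketched with toy definitions (the paper's exact definitions may differ): total fader travel, overshoot past the target in the direction of the move, and the time until the fader settles near the target.

```python
def fader_parameters(levels, target):
    """Toy fader-movement parameters for a downward fader move sampled as a
    list of levels in dB (illustrative definitions, not the paper's)."""
    movement = sum(abs(b - a) for a, b in zip(levels, levels[1:]))  # total travel
    overshoot = max(0.0, target - min(levels))   # how far below the target
    # adjustment time: first sample index after which the fader stays
    # within 1 dB of the target
    adjustment = next(i for i in range(len(levels))
                      if all(abs(v - target) <= 1.0 for v in levels[i:]))
    return movement, overshoot, adjustment

# An engineer pulls down to a -10 dB target, overshooting by 2 dB.
traj = [0.0, -6.0, -12.0, -9.5, -10.0, -10.0]
movement, overshoot, adjustment = fader_parameters(traj, -10.0)
assert movement == 15.0 and overshoot == 2.0 and adjustment == 3
```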

P11-5 Validation of the Binaural Room Scanning Method for Cinema Audio Research
Linda A. Gedemer, University of Salford - Salford, UK; Harman International - Northridge, CA, USA; Todd Welti, Harman International - Northridge, CA, USA
Binaural Room Scanning (BRS) is a method of capturing a binaural representation of a room using a dummy head with binaural microphones in the ears and later reproducing it over a pair of calibrated headphones. In this method multiple measurements are made at differing head angles and stored separately as data files. A playback system employing headphones and a headtracker recreates the original environment for the listener, so that as they turn their head, the rendered audio during playback matches the listener's current head angle. This paper reports the results of a validation test of a custom BRS system that was developed for research and evaluation of different loudspeakers and different listening spaces. To validate the performance of the BRS system, listening evaluations of different in-room equalizations of a 5.1 loudspeaker system were made both in situ and via the BRS system. This was repeated using three different loudspeaker systems in three different sized listening rooms.
Convention Paper 8974


Friday, October 18, 2:30 pm — 3:30 pm (Room 1E13)

Network Audio: N3 - The Role of Standards in Audio Networking

Mark Yonge, Blakeney, Gloucestershire, UK
Jeff Berryman, Bosch Communications - Ithaca, NY, USA
Kevin Gross, AVA Networks - Boulder, CO, USA
Andreas Hildebrand, ALC NetworX - Munich, Germany
Lee Minich, Lab X Technologies - Rochester, NY, USA

A number of standards organizations and industry associations have been active in promoting standards relating to audio networks, such as EBU, IEC, and not least AES with recent standards AES64, AES67, and project X-210. Networks themselves are standardized under the auspices of bodies such as the IEEE and IETF. This session will describe the landscape of standards bodies and their areas of interest in audio networking and will examine the questions:

• Are standards important?
• How does all this standard activity impact the real world of audio networks?
• How do these standards benefit the marketplace, end users and the technology suppliers to this market?
• Is development of and adherence to standards better for suppliers and end users than letting the manufacturers’ proprietary solutions compete for market dominance?


Friday, October 18, 2:30 pm — 4:30 pm (Room 1E12)

Live Sound Seminar: LS6 - Wireless Microphones and Performers: Mic Placement and Handling for Multiple Actors

Mary McGregor, Freelance, Local 1 - New York, NY, USA
Stephanie Vetter, Freelance, Local 1 - New York, NY, USA

Fitting actors with wireless microphone elements and transmitters has become a detailed art form: the actor must be comfortable, the electronics safe and secure, and the sound right with minimal detrimental audio effects, all while maintaining the visual illusion. Two of the most widely recognized artisans in this field provide hands-on demonstrations of basic technique along with some time-tested “tricks of the trade.”


Friday, October 18, 3:30 pm — 5:00 pm (Room 1E13)

Network Audio: N4 - Command and Control Protocols, Target Application Use Cases

Tim Shuttleworth, Renkus Heinz - Oceanside, CA, USA
Jeff Berryman, Bosch Communications - Ithaca, NY, USA
Andrew Eales, Wellington Institute of Technology - Wellington, New Zealand; Rhodes University - Grahamstown, South Africa
Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
Jeff Koftinoff, Meyer Sound Canada - Vernon, BC, Canada

With the increasing use of data networks for the command and control of audio devices, a number of protocols have been defined and promoted. These competing protocol initiatives, while providing methods suited to their target applications, have created confusion among potential adopters as to which protocol best fits their needs. In addition, the question is being asked: why do we need so many “standard” protocols? At least four different industry organizations have involved themselves in some form of standardized protocol effort. The AES is currently pursuing standardization of two such protocols, AES64 and X-210 (aka OCA); the IEC has IEC 62379; the IEEE is defining AVDECC (IEEE 1722.1); ESTA offers ACN; and there is OSC from CNMAT at UC Berkeley. This workshop addresses what differentiates these protocols by examining their target applications.

This session is presented in association with the AES Technical Committee on Network Audio Systems


Friday, October 18, 4:30 pm — 6:30 pm (Room 1E12)


Live Sound Seminar: LS7 - Design for Houses of Worship and Installed Sound

Bill Thrasher, Sr., Thrasher Design Group, Inc. - Kennesaw, GA, USA

One of the professional audio industry's largest and persistently expanding markets, the House of Worship sector has matured into a highly sophisticated, demanding, and incredibly diverse market. This panel will discuss issues ranging from budget to design and installation, and from service and support to operational training, from the perspectives of management, manufacturers, consultants/designers, contractors, vendors, users, operators, and listeners.


Saturday, October 19, 9:00 am — 11:00 am (Room 1E12)

Live Sound Seminar: LS8 - Design Meets Reality: The A2’s and Production Sound Mixer’s Challenges, Obstacles, and Responsibilities for Loading in and Implementing the Sound Designer’s Concept

Christopher Evans, Benedum Center - Pittsburgh, PA, USA
Colle Bustin, IRES-Partners, LLC - New York, NY, USA
Paul Garrity, Auerbach Pollock Friedlander - New York, NY, USA; Auerbach Pollock Friedlander - San Francisco, CA, USA
Scott Lehrer, Scott Lehrer Sound Design, Ltd. - New York, NY, USA
Augie Propersi, NYC City Center
Dominic Sack, Sound Associates, Inc.
Christopher Sloan, Production Engineer, The Book of Mormon

The best intentions of the sound designer don’t always fit in with the venue’s interior or infrastructure, other departments’ needs, or other changes as a production is loaded in and set up for the first time. How the designer’s designated representative on site addresses these issues is critical to keeping the overall vision of the sound design and production aesthetics intact while keeping an eye on the budget and schedule.


Saturday, October 19, 11:00 am — 12:30 pm (Room 1E12)

Live Sound Seminar: LS9 - Assuring High Quality Speech Intelligibility for Sports Events In Stadiums

Renato Cipriano, Walters Storyk Design Group Brazil
Sergio Molho, Walters Storyk Design Group - Highland, NY, USA
John Storyk, Walters-Storyk Design Group - Highland, NY, USA

Establishing high-quality speech intelligibility for sports events in stadiums requires a somewhat different mindset than that required for optimum concert sound. However, stadiums frequently host both types of events, and systems must be adaptable to both. Two key issues to consider are speech intelligibility, which for sports events is more critical than the frequency response of the system, and uniform sound coverage, which is critical to meeting FIFA rules and regulations. For the past two-plus years, WSDG Brazil has been working on three major stadium projects simultaneously, in preparation for the 2014 World Cup and the 2016 Olympics. A number of venerable older Brazilian arenas are currently undergoing substantial upgrades. Work on the Mineirão Stadium (1965) in Belo Horizonte was completed last year; it has already hosted Paul McCartney and will host 2014 World Cup matches. Renovations on Independência (1950), also in Belo Horizonte, are approaching completion. Renovations on the Maracanã Stadium (1950) in Rio de Janeiro, Brazil's largest stadium, will be completed later this year. WSDG Brazil is tasked with designing the acoustics and the complete audio and video systems for all three stadiums. This presentation will cover:

• Analysis of the requirements involved in the design process of stadium sound systems, including frequency response, target STL and STI values, coverage (SPL distribution), zoning, architectural and structural integration, and redundancy
• Overall electroacoustic simulations and auralization for large arenas
• Sound system design for security (evacuation and public address announcements), including broadcast requirements
• Zoning and distribution issues that require special attention and drive design decisions


Saturday, October 19, 11:15 am — 12:45 pm (Room 1E14)


Tutorial: T13 - A Holistic Approach to Crossover Systems and Equalization for Loudspeakers (A Master Class Event)

Malcolm O. J. Hawksford, University of Essex - Colchester, Essex, UK

Loudspeaker systems employ crossover filters and equalization to optimize their performance in the presence of electroacoustic transducer limitations and the constraints of the associated loudspeaker enclosures. This Master Class will discuss both analog and digital techniques and include examples of constant-voltage, all-pass, and constant-delay crossover alignments, together with the constraints imposed by the choice of signal processing. The meaning of “minimum phase” will be described, including its link to causality, and digital equalization strategies will be presented that emphasize the importance of decomposing a loudspeaker impulse response into minimum-phase and excess-phase transfer functions. The session will include demonstrations of minimum-phase response derivation from a magnitude-frequency response and of the audibility of pure phase distortion, to justify the use of the Linkwitz-Riley 4th-order class of analog crossover alignments.
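As a small illustration of the Linkwitz-Riley alignment mentioned above: an LR4 crossover can be built by cascading two 2nd-order Butterworth sections per band, so each band is 6 dB down at the crossover frequency and the low- and high-pass outputs sum to a flat-magnitude all-pass response. A sketch using SciPy (a tool assumption of this example, not part of the Master Class material):

```python
import numpy as np
from scipy import signal

def lr4(fc, fs):
    """4th-order Linkwitz-Riley crossover: two cascaded 2nd-order
    Butterworth sections per band, giving -6 dB at fc for each band
    and a flat-magnitude (all-pass) sum of the two bands."""
    b_lo, a_lo = signal.butter(2, fc, btype="low", fs=fs)
    b_hi, a_hi = signal.butter(2, fc, btype="high", fs=fs)
    # cascade each Butterworth section with itself to form the LR4 bands
    lo = (np.convolve(b_lo, b_lo), np.convolve(a_lo, a_lo))
    hi = (np.convolve(b_hi, b_hi), np.convolve(a_hi, a_hi))
    return lo, hi
```

The flat sum follows from the identity s⁴ + ωc⁴ = B(s)·B(−s) for the 2nd-order Butterworth polynomial B(s), which makes the LP + HP sum an all-pass B(−s)/B(s); this is why no polarity inversion is needed for LR4, unlike LR2.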


Saturday, October 19, 2:30 pm — 4:30 pm (Room 1E12)

Live Sound Seminar: LS10 - Production Wireless Systems: An Examination of Antennas, Coax, Filters, and Other Tips and Tricks from the Experts

James Stoffo, Radio Active Designs - Key West, FL, USA
Brooks Schroeder, Frequency Coordination Group - Orlando, FL, USA
Vinnie Siniscal, Firehouse Productions - Red Hook, NY, USA
Ed Weizcerak, Freelance

Beyond the basics of accepted RF practices for wireless microphones, intercoms, IEMs, and IFBs lies a plethora of facts about antennas, coax, and other passive devices not commonly understood by the production community at large. This session brings together an expert group of RF practitioners who will discuss the various types and performance characteristics of antennas, coax, filters, isolators/circulators, hybrid combiners, directional couplers, and other devices, along with their own tips and tricks for dealing with difficult deployments.


Saturday, October 19, 3:00 pm — 4:00 pm (Stage)


Project Studio Expo: Loudness, Levels, and Metering

Hugh Robjohns, Technical Editor, Sound on Sound - Cambridge, UK

This seminar will cover the development and history of audio metering and discuss why traditional analog instruments are obsolete in the digital age. It will then cover digital metering and its associated problems, and contrast the concepts and practices of peak and loudness normalization. That will lead to the aims of the ITU-R BS.1770 loudness standard, its practical implementation, examples of how it has been implemented by a number of manufacturers, and how it works in practice. There will be audio/visual examples throughout.
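The core of a BS.1770 loudness measurement, K-weighting followed by a mean-square energy measure, can be sketched as below. The filter coefficients are those published in ITU-R BS.1770 for 48 kHz sampling; the sketch omits the 400 ms block gating of BS.1770-3, so it is an ungated approximation for illustration, not a compliant meter:

```python
import numpy as np
from scipy import signal

# ITU-R BS.1770 K-weighting coefficients, valid only for fs = 48 kHz
SHELF_B = [1.53512485958697, -2.69169618940638, 1.19839281085285]
SHELF_A = [1.0, -1.69065929318241, 0.73248077421585]
HPF_B = [1.0, -2.0, 1.0]
HPF_A = [1.0, -1.99004745483398, 0.99007225036621]

def k_weight(x):
    """Apply BS.1770 K-weighting: head-model shelf, then RLB high-pass."""
    return signal.lfilter(HPF_B, HPF_A, signal.lfilter(SHELF_B, SHELF_A, x))

def loudness_lkfs(channels):
    """Ungated BS.1770 loudness of a list of mono channels at fs = 48 kHz.

    A compliant BS.1770-3 integrated-loudness meter adds 400 ms block
    gating and per-channel weights for the surrounds; both are omitted
    here for brevity.
    """
    z = sum(np.mean(k_weight(ch) ** 2) for ch in channels)
    return -0.691 + 10.0 * np.log10(z)
```

The standard's stated reference point is that a 0 dBFS, 997 Hz sine on one front channel reads −3.01 LKFS; the −0.691 dB offset exists to cancel the K-weighting gain at that frequency.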


Saturday, October 19, 4:30 pm — 7:00 pm (Room 1E12)

Live Sound Seminar: LS11 - TVBDs, Geo-Location Databases, and Upcoming Spectrum Auctions: An In-Depth Look at Their Impact on Wireless Microphone Operations

Henry Cohen, CP Communications
Joe Ciaudelli, Sennheiser Electronic Corporation - Old Lyme, CT, USA
Ira Keltz, Federal Communications Commission
Michael Marcus, Marcus Spectrum Solutions - Cabin John, MD, USA
David Pawlik, Skadden, Arps, Slate, Meagher & Flom - Washington, DC, USA
Edgar Reihl, Shure, Incorporated - Niles, IL, USA
Peter Stanforth, Spectrum Bridge
James Stoffo, Radio Active Designs - Key West, FL, USA

Television band devices (TVBDs) and the geo-location databases directing TVBD operations are a reality, and the first certified fixed TVBDs are in service. The 600 MHz auction will likely occur in 2014, with a vacate date within the next six to eight years. Operating wireless microphones, IEMs, intercoms, and cueing systems in this new environment requires understanding how the databases work, the rules governing both licensed and unlicensed wireless production equipment, and what spectrum is available now and will be available in the future. This panel brings together a diverse group of individuals intimately involved from the beginning with TVBDs, databases, spectrum auctions, and the new FCC rules, as well as seasoned veterans of medium- to large-scale wireless microphone deployments, to discuss how the databases operate, how to use them to register TV channel usage, and best procedures and practices to ensure minimal problems.


Sunday, October 20, 11:00 am — 12:30 pm (Room 1E08)

Network Audio: N5 - X192 / AES67: How the New Networked Audio Interoperability Standard Was Designed

Greg Shay, The Telos Alliance - Cleveland, OH, USA
Kevin Gross, AVA Networks - Boulder, CO, USA
Stefan Heinzmann, Heinzmann - Konstanz, Germany
Andreas Hildebrand, ALC NetworX - Munich, Germany
Gints Linis, University of Latvia - IMCS - Riga, Latvia

It is said that to really understand a solution, you must clearly understand the problems it solves. A technical specification like AES67 is the end result of much discussion and deliberation; however, many of the intentions, the tradeoffs that were made, and the problems being solved are not fully captured in the resulting document.

This panel will present the background of a number of the decisions that were made and embodied in AES67. It will describe the problems that were targeted, as best as they were understood. What were some of the difficult tradeoffs?

Networked audio will be new for some users, while some members of X192 have roots in networked audio going back 20 years. Given a proverbial clean slate by the AES, come hear the reasons why the choices in AES67 were made.


Sunday, October 20, 11:00 am — 1:00 pm (Room 1E12)

Live Sound Seminar: LS12 - An Interview with Dave Natale

Keith Clark, ProSound Web
Dave Natale, Audio Resource Group, Inc. - Lancaster, PA, USA

Dave Natale is a veteran of over 30 years mixing front of house for the biggest names in concert touring, having spent much of that time working for Clair Brothers Audio (now Clair Global). Dave will discuss his career, knowledge gained along the way, and what all FOH mixers should know and strive for. A Q&A will follow the interview. Keith Clark is the Editor of ProSoundWeb and has been involved in the pro audio publishing field for more than 20 years.


Sunday, October 20, 1:00 pm — 3:00 pm (Room 1E08)

Workshop: W29 - Miking for PA

Eddy B. Brixen, EBB-consult/DPA Microphones - Smorum, Denmark
Giacomo De Caterini, Casale Bauer - Rome, Italy; Accademia di Santa Cecilia
Henrik Kjelin, Complete Vocal Institute - Copenhagen, Denmark
Cathrine Sadolin, Complete Vocal Institute - Copenhagen, Denmark
Nevin Steinberg, Nevin Steinberg Sound Design - New York, NY, USA

Miking for PA is a very important task. Providing amplification for the spoken voice or an acoustic music instrument requires good knowledge of the sound source, the PA system, the monitoring system, and the microphones. This workshop takes you through some of the important issues and decisions in selecting a microphone with regard to peak level capacity, sensitivity, directivity, frequency response, etc. Getting balance, definition, and the right timbre or “sound” while still avoiding acoustic feedback: that's the thing. Recognized engineers and sound designers will generously share experiences from their work on stage. Warning: some attendees may pick up ideas that will change their habits forever…

This session is presented in association with the AES Technical Committee on Microphones and Applications


Sunday, October 20, 2:00 pm — 4:00 pm (Room 1E07)

Paper Session: P18 - Perception—Part 2

Agnieszka Roginska, New York University - New York, NY, USA

P18-1 Negative Formant Space, “O Superman,” and Meaning
S. Alexander Reed, Ithaca College - New York, NY, USA
This in-progress exploration considers both some relationships between sounding and silent formants in music and the compositional idea of spectral aggregates. Using poststructuralist lenses and also interpretive spectrographic techniques informed by music theorist Robert Cogan, it offers a reading of Laurie Anderson’s 1982 hit “O Superman” that connects the aforementioned concerns of timbre with interpretive processes of musical meaning. In doing so, it contributes to the expanding musicological considerations of timbre beyond its physical, psychoacoustic, and orchestrational aspects.
Convention Paper 9014 (Purchase now)

P18-2 The Effects of Interaural Level Differences Caused by Interference between Lead and Lag on Summing Localization
M. Torben Pastore, Rensselaer Polytechnic Institute - Troy, NY, USA; Jonas Braasch, Rensselaer Polytechnic Institute - Troy, NY, USA
Traditionally, the perception of an auditory event in the summing localization range is shown as a linear progression from a location between a coherent lead and lag to the lead location as the delay between them increases from 0 ms to approximately 1 ms. This experiment tested the effects of interference between temporally overlapping lead and lag stimuli on summing localization. We found that the perceived lateralization of the auditory event oscillates with the period of the center frequency of the stimulus, unlike what the traditional linear model would predict. Analysis shows that this is caused by interaural level differences due to interference between a coherent lead and lag.
Convention Paper 9015 (Purchase now)
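The interference mechanism the abstract describes is easy to reproduce numerically: the resultant amplitude of a sinusoidal lead plus a delayed lag is |1 + g·e^(−j2πfd)|, which oscillates over delay with the period of the center frequency. A small hypothetical illustration (not the authors' stimuli or analysis):

```python
import numpy as np

def lead_lag_amplitude(freq_hz, delay_s, lag_gain=1.0):
    """Resultant amplitude of a sinusoidal lead plus a delayed lag:
    |1 + g * exp(-j*2*pi*f*d)|. As the delay d sweeps, the amplitude
    (and hence the level at each ear) oscillates with period 1/f."""
    return abs(1.0 + lag_gain * np.exp(-2j * np.pi * freq_hz * delay_s))
```

For a 500 Hz stimulus the lead and lag reinforce at 0 ms and 2 ms delay and cancel at 1 ms; because the two ears see slightly different delays, the interference differs between ears, producing the delay-dependent interaural level differences the paper investigates.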

P18-3 Paired Comparison as a Method for Measuring Emotions
Judith Liebetrau, Ilmenau University of Technology - Ilmenau, Germany; Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Johannes Nowak, Ilmenau University of Technology - Ilmenau, Germany; Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Thomas Sporer, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Ilmenau University of Technology - Ilmenau, Germany; Matthias Krause, Ilmenau University of Technology - Ilmenau, Germany; Martin Rekitt, Ilmenau University of Technology - Ilmenau, Germany; Sebastian Schneider, Ilmenau University of Technology - Ilmenau, Germany
Due to the growing complexity and functionality of multimedia systems, quality evaluation becomes a cross-disciplinary task, taking technology-centric assessment as well as human factors into account. Undoubtedly, emotions induced during perception have a considerable influence on the experienced quality. Therefore the assessment of users' affective state is of great interest for the development and improvement of multimedia systems. This work discusses problems with common assessment methods as well as newly applied methods in emotion research. Direct comparison of stimuli, as a method intended for faster and easier assessment of emotions, is investigated and compared to previous work. The results of the investigation showed that paired comparison seems inadequate for assessing multidimensional items/problems, which often occur in multimedia applications.
Convention Paper 9016 (Purchase now)

P18-4 Media Content Emphasis Using Audio Effect Contrasts: Building Quantitative Models from Subjective Evaluations
Xuchen Yang, University of Rochester - Rochester, NY, USA; Zhe Wen, University of Rochester - Rochester, NY, USA; Gang Ren, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
In this paper we study media content emphasis patterns of audio effects and construct their quantitative models using subjective evaluation experiments. The media content emphasis patterns are produced by contrasts between effect-sections and non-effect sections, which change the focus of audience attention. We investigate media emphasis patterns of typical audio effects including equalization, reverberation, dynamic range control, and chorus. We compile audio test samples by applying different settings of audio effects and their permutations. Then we construct quantitative models based on the audience rating of the “subjective significance” of test audio segments. Statistical experiment design and analysis techniques are employed to establish the statistical significance of our proposed models.
Convention Paper 9017 (Purchase now)


Sunday, October 20, 2:30 pm — 4:30 pm (Room 1E12)

Live Sound Seminar: LS13 - Audio for Corporate Presentations

Michael (Bink) Knowles, Freelance Engineer - Oakland, CA, USA
Paul Bevan
Bruce Cameron, House to Half Inc. - Carmel, NY, USA
Lee Kalish, Positive Feedback llc - Kingston, NY, USA

Sound for corporate events can be lucrative but it can also be very demanding. Complex matrixing or other unusual solutions may be required in signal routing to loudspeaker zones, recording devices, distant participants and web streaming. Amplifying lavalier mics strongly into a loudspeaker system is its own art. Client relations are of top importance. We will talk about how these factors shape our differing approaches to corporate sound systems. Audience questions are encouraged.


Sunday, October 20, 2:30 pm — 4:00 pm (Room 1E13)

Workshop: W31 - Beam Steering Loudspeakers and Line Arrays

Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK
Stefan Feistel, AFMG Technologies GmbH - Berlin, Germany
Ralph Heinz, Renkus-Heinz, Inc. - Foothill Ranch, CA, USA
Philippe Robineau, Tannoy - Coatbridge, Scotland, UK
Evert Start, Duran Audio - Zaltbommel, Netherlands
Ambrose Thompson, Martin Audio - High Wycombe, UK

Beam-steered line arrays have been commercially available for more than 15 years. Although originally intended for and restricted to speech applications, in the last few years full-range music systems have also started to enter the market. This tutorial will discuss the technology behind the systems, their applications, and potential limitations. The panel members all have wide experience with steered arrays and so are able to cover all aspects of their design and application. The workshop will include a number of case histories and aims to bring anyone not familiar with the technology up to speed, as well as providing experienced users with answers to long-standing questions.

This session is presented in association with the AES Technical Committee on Acoustics and Sound Reinforcement
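The basic principle behind delay-based beam steering is simple to state: delaying each element of a straight line array by tₙ = n·d·sin(θ)/c tilts the radiated wavefront, moving the main lobe to angle θ off broadside. A minimal sketch of that delay law (illustrative only; real steered-column products combine this with shading, filtering, and multi-beam optimization):

```python
import numpy as np

def steering_delays(n_elements, spacing_m, angle_deg, c=343.0):
    """Per-element delays (seconds) that steer a straight line array's
    main lobe to angle_deg off broadside: t_n = n * d * sin(theta) / c."""
    n = np.arange(n_elements)
    return n * spacing_m * np.sin(np.radians(angle_deg)) / c
```

With 0.1 m spacing and a 30 degree steer, adjacent elements differ by about 0.146 ms of delay; at broadside (0 degrees) all delays are zero and the array behaves as a conventional line source.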



EXHIBITION HOURS: October 18th 10am – 6pm; October 19th 10am – 6pm; October 20th 10am – 4pm
REGISTRATION DESK: October 16th 3pm – 7pm; October 17th 8am – 6pm; October 18th 8am – 6pm; October 19th 8am – 6pm; October 20th 8am – 4pm
TECHNICAL PROGRAM: October 17th 9am – 7pm; October 18th 9am – 7pm; October 19th 9am – 7pm; October 20th 9am – 6pm