AES New York 2015
Live Sound Track Event Details
Thursday, October 29, 9:00 am — 10:00 am (Room 1A14)
Workshop: W1 - Hearing Smart
Chair:Kathy Peck, ED and Co-founder, H.E.A.R. - San Francisco, CA, USA; co-presenter: Center for the Artist at NY-Presbyterian Weill Cornell Medical Center - New York, NY, USA
Moderator:
Dan Beck, Trustee of The Music Performance Trust Fund - New York, NY, USA
Panelists:
Richard Einhorn, Einhorn Consulting, LLC - New York, NY, USA; Richard Einhorn Productions, Inc.
Marty Garcia, Future Sonics - Bristol, PA, USA
S. Benjamin Kanters, Columbia College - Chicago, IL, USA; Hear Tomorrow
Joseph Montano, Chief of Audiology and Speech Language Pathology at New York Presbyterian Hospital-Weill Cornell Medical Center - New York, NY, USA
Abstract:
In 2015 the World Health Organization (WHO) cited 1.1 billion people at risk of hearing loss, a serious threat posed by exposure to recreational noise: the unsafe use of personal audio devices and exposure to damaging levels of sound at noisy entertainment venues such as nightclubs, bars, and sporting events. Founded in 1989 with the support of Pete Townshend of the Who, the nonprofit H.E.A.R. (Hearing Education and Awareness for Rockers) was an early proponent of hearing conservation and launched worldwide grassroots initiatives. A panel of hearing conservation experts from the music and medical industries has been brought together to discuss hearing education and prevention, in an effort to unite the music industry and the hearing health/medical community, raise awareness, and improve hearing conservation for performers, personnel, and consumers of music to ensure its continued creation, performance, and enjoyment.
Thursday, October 29, 9:00 am — 12:30 pm (Room 1A08)
Paper Session: P1 - Signal Processing
Chair:
Scott Norcross, Dolby Laboratories - San Francisco, CA, USA
P1-1 Time-Frequency Analysis of Loudspeaker Sound Power Impulse Response—Pascal Brunet, Samsung Research America - Valencia, CA, USA; Audio Group - Digital Media Solutions; Allan Devantier, Samsung Research America - Valencia, CA, USA; Adrian Celestinos, Samsung Research America - Valencia, CA, USA
In normal conditions (e.g., a living room) the total sound power emitted by the loudspeaker plays an important role in the listening experience. Along with the direct sound and first reflections, the sound power defines the loudspeaker performance in the room. The acoustic resonances of the loudspeaker system are especially important, and thanks to spatial averaging, are more easily revealed in the sound power response. In this paper we use time-frequency analysis to study the spatially averaged impulse response and reveal the structure of its resonances. We also show that the net effect of loudspeaker equalization is not only the attenuation of the resonances but also the shortening of their duration.
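As an illustration of the analysis described above, a minimal sketch (synthetic impulse responses and placeholder parameters, not the authors' data or code) that averages short-time power spectra over measurement angles and inspects how resonances decay over time:

import numpy as np
from scipy import signal

fs = 48000
rng = np.random.default_rng(0)
# Placeholder data: 36 measured impulse responses around the loudspeaker,
# replaced here by decaying noise so the example runs stand-alone.
impulse_responses = rng.standard_normal((36, 8192)) * np.exp(-np.arange(8192) / 400.0)

# Average the short-time power spectra over measurement angles, a rough
# stand-in for a spatially averaged (sound-power-like) response.
specs = []
for ir in impulse_responses:
    f, t, Sxx = signal.spectrogram(ir, fs=fs, window='hann',
                                   nperseg=256, noverlap=192, mode='psd')
    specs.append(Sxx)
power_tf = np.mean(specs, axis=0)              # (frequency bins, time frames)
power_db = 10 * np.log10(power_tf + 1e-12)     # resonances appear as slowly decaying ridges
print(power_db.shape)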
Convention Paper 9354 (Purchase now)
P1-2 Low-Delay Transform Coding Using the MPEG-H 3D Audio Codec—Christian R. Helmrich, International Audio Laboratories - Erlangen, Germany; Michael Fischer, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Recently the ISO/IEC MPEG-H 3D Audio standard for perceptual coding of one or more audio channels has been finalized. It is a little-known fact that, particularly for communication applications, the 3D Audio core-codec can be operated in a low-latency configuration in order to reduce the algorithmic coding/decoding delay to 44, 33, 24, or 18 ms at a sampling rate of 48 kHz. This paper introduces the essential coding tools required for high-quality low-delay coding–transform splitting, intelligent gap filling, and stereo filling–and demonstrates by means of blind listening tests that the achievable subjective performance compares favorably with, e.g., that of HE-AAC even at low bit-rates.
Convention Paper 9355 (Purchase now)
P1-3 Dialog Control and Enhancement in Object-Based Audio Systems—Jean-Marc Jot, DTS, Inc. - Los Gatos, CA, USA; Brandon Smith, DTS, Inc. - Bellevue, WA, USA; Jeff Thompson, DTS, Inc. - Bellevue, WA, USA
Dialog is often considered the most important audio element in a movie or television program. The potential for artifact-free dialog salience personalization is one of the advantages of new object-based multichannel digital audio formats, along with the ability to ensure that dialog remains comfortably audible in the presence of concurrent sound effects or music. In this paper we review some of the challenges and requirements of dialog control and enhancement methods in consumer audio systems, and their implications in the specification of object-based digital audio formats. We propose a solution incorporating audio object loudness metadata, including a simple and intuitive consumer personalization interface and a practical head-end encoder extension.
Convention Paper 9356 (Purchase now)
P1-4 Frequency-Domain Parametric Coding of Wideband Speech–A First Validation Model—Aníbal Ferreira, University of Porto - Porto, Portugal; Deepen Sinha, ATC Labs - Newark, NJ, USA
Narrow band parametric speech coding and wideband audio coding represent opposite coding paradigms involving audible information, namely in terms of the specificity of the audio material, target bit rates, audio quality, and application scenarios. In this paper we explore a new avenue addressing parametric coding of wideband speech using the potential and accuracy provided by frequency-domain signal analysis and modeling techniques that typically belong to the realm of high-quality audio coding. A first analysis-synthesis validation framework is described that illustrates the decomposition, parametric representation, and synthesis of perceptually and linguistically relevant speech components while preserving naturalness and speaker specific information.
Convention Paper 9357 (Purchase now)
P1-5 Proportional Parametric Equalizers—Application to Digital Reverberation and Environmental Audio Processing—Jean-Marc Jot, DTS, Inc. - Los Gatos, CA, USA
Single-band shelving or presence boost/cut filters are useful building blocks for a wide range of audio signal processing functions. Digital filter coefficient formulas for elementary first- or second-order IIR parametric equalizers are reviewed and discussed. A simple modification of the classic Regalia-Mitra design yields efficient solutions for tunable digital equalizers whose dB magnitude frequency response is proportional to the value of their gain control parameter. Practical applications to the design of tone correctors, artificial reverberators and environmental audio signal processors are described.
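To illustrate the property the paper targets, a minimal sketch (a conventional Audio EQ Cookbook low shelf with hypothetical settings, not the paper's proportional design) that measures how far a standard shelving filter's dB magnitude deviates from scaling linearly with its gain parameter:

import numpy as np
from scipy.signal import freqz

def low_shelf(gain_db, f0, fs, S=1.0):
    # Second-order low-shelf coefficients per the Audio EQ Cookbook (R. Bristow-Johnson).
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw = np.cos(w0)
    b = np.array([A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
                  2 * A * ((A - 1) - (A + 1) * cosw),
                  A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)])
    a = np.array([(A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
                  -2 * ((A - 1) + (A + 1) * cosw),
                  (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha])
    return b / a[0], a / a[0]

fs, f0 = 48000, 200.0
w = np.linspace(0.01, np.pi, 512)
_, H3 = freqz(*low_shelf(3.0, f0, fs), worN=w)
_, H12 = freqz(*low_shelf(12.0, f0, fs), worN=w)
# An exactly proportional EQ would make the 12 dB curve four times the 3 dB curve (in dB).
err_db = 20 * np.log10(np.abs(H12)) - 4 * 20 * np.log10(np.abs(H3))
print(np.max(np.abs(err_db)))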
Convention Paper 9358 (Purchase now)
P1-6 Comparison of Parallel Computing Approaches of a Finite-Difference Implementation of the Acoustic Diffusion Equation Model—Juan M. Navarro, UCAM - Universidad Católica San Antonio - Guadalupe (Murcia), Spain; Baldomero Imbernón, UCAM Catholic University of San Antonio - Murcia, Spain; José J. López, Universitat Politècnica de València - Valencia, Spain; José M. Cecilia, UCAM Catholic University of San Antonio - Murcia, Spain
The diffusion equation model has been intensively researched as a room-acoustics simulation algorithm in recent years. A 3-D finite-difference implementation of this model was proposed to evaluate the propagation of the sound field within rooms over time. Despite the computational savings this model offers for calculating the room energy impulse response, elapsed times are still long when high spatial resolutions and/or simulations in several frequency bands are needed. In this work several data-parallel approaches to this finite-difference solution on Graphics Processing Units are proposed using the Compute Unified Device Architecture (CUDA) programming model. A comparison of their performance running on different models of Nvidia GPUs is carried out. In general, the 2D vertical block approach running on a Tesla K20c shows the best speed-up, more than 15 times faster than the CPU version.
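For orientation, a minimal sketch (NumPy on the CPU, example coefficients, not the paper's CUDA kernels) of one explicit finite-difference time step of the acoustic diffusion equation dw/dt = D*laplacian(w) - sigma*w + q on a 3-D grid; the slice-based update mirrors the data-parallel structure that maps onto GPU threads:

import numpy as np

c = 343.0                  # speed of sound [m/s]
lam = 0.5                  # mean free path of the room [m] (example value)
D = lam * c / 3.0          # diffusion coefficient
sigma = c * 0.003          # absorption term (example air-absorption coefficient)
dx, dt = 0.1, 1e-5         # grid spacing [m] and time step [s] (dt below the stability limit)

w = np.zeros((64, 64, 64)) # acoustic energy density on a 3-D grid
q = np.zeros_like(w)
q[32, 32, 32] = 1.0        # impulsive source near the room center

def step(w, q):
    # 7-point Laplacian on interior points; boundary absorption terms are omitted here.
    lap = np.zeros_like(w)
    lap[1:-1, 1:-1, 1:-1] = (w[2:, 1:-1, 1:-1] + w[:-2, 1:-1, 1:-1] +
                             w[1:-1, 2:, 1:-1] + w[1:-1, :-2, 1:-1] +
                             w[1:-1, 1:-1, 2:] + w[1:-1, 1:-1, :-2] -
                             6.0 * w[1:-1, 1:-1, 1:-1]) / dx**2
    return w + dt * (D * lap - sigma * w + q)

for n in range(100):
    w = step(w, q)
    q[:] = 0.0             # source active only on the first step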
Convention Paper 9359 (Purchase now)
P1-7 An Improved and Generalized Diode Clipper Model for Wave Digital Filters—Kurt James Werner, Center for Computer Research in Music and Acoustics (CCRMA) - Stanford, CA, USA; Stanford University; Vaibhav Nangia, Stanford University - Stanford, CA, USA; Alberto Bernardini, Politecnico di Milano - Milan, Italy; Julius O. Smith, III, Stanford University - Stanford, CA, USA; Augusto Sarti, Politecnico di Milano - Milan, Italy
We derive a novel explicit wave-domain model for “diode clipper” circuits with an arbitrary number of diodes in each orientation, applicable, e.g., to wave digital filter emulation of guitar distortion pedals. Improving upon and generalizing the model of Paiva et al. (2012), which approximates reverse-biased diodes as open circuits, we derive a model with an approximated correction term using two Lambert W functions. We study the energetic properties of each model and clarify aspects of the original derivation. We demonstrate the model's validity by comparing a modded Tube Screamer clipping stage emulation to SPICE simulation.
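For context, a minimal sketch (example 1N4148-like parameters) of the classic explicit Lambert W diode-clipper solution that treats the reverse-biased diode as an open circuit, i.e., the Paiva et al. approximation the paper improves on, not the paper's corrected model:

import numpy as np
from scipy.special import lambertw

# Example diode-clipper parameters: series resistance, saturation current,
# ideality factor, and thermal voltage.
R, Is, n, Vt = 2.2e3, 2.52e-9, 1.752, 25.85e-3

def clip(v_in):
    # Single forward-conducting diode: (v_in - v_out)/R = Is*(exp(v_out/(n*Vt)) - 1),
    # solved in closed form with the Lambert W function; applied per polarity.
    a = n * Vt
    v = np.abs(v_in) + Is * R
    w = lambertw((Is * R / a) * np.exp(v / a)).real
    return np.sign(v_in) * (v - a * w)

t = np.arange(480) / 48000.0
y = clip(2.0 * np.sin(2 * np.pi * 1000.0 * t))   # 2 V peak, 1 kHz input
print(y.max())                                    # output is limited to roughly a diode drop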
Convention Paper 9360 (Purchase now)
Thursday, October 29, 9:00 am — 11:00 am (Room 1A10)
Tutorial: T2 - Microphones—Can You Hear the Specs?
Chair:Eddy B. Brixen, EBB-consult - Smørum, Denmark; DPA Microphones
Panelists:
Jürgen Breitlow, Georg Neumann Berlin - Berlin, Germany; Sennheiser Electronic - Wedemark, Germany
David Josephson, Josephson Engineering, Inc. - Santa Cruz, CA, USA
Helmut Wittek, SCHOEPS GmbH - Karlsruhe, Germany
Abstract:
There are lots and lots of microphones available to the audio engineer. The final choice is often made on the basis of experience or perhaps just habit. (Sometimes the mic is chosen simply for its looks...) Nevertheless, there is valuable information in the microphone specifications. This tutorial demystifies the most important microphone specs and provides the attendee with up-to-date information on how these specs are obtained and understood and how the numbers relate to the perceived sound. It takes a critical look at how specs are presented to the user, what to look for, and what to expect.
Thursday, October 29, 10:45 am — 12:15 pm (Room 1A13)
Product Development: PD2 - Practical Loudspeaker Processing for the Practicing Engineer
Presenter:Paul Beckmann, DSP Concepts, LLC - Sunnyvale, CA, USA
Abstract:
Loudspeaker signal processing is making the transition from traditional analog designs to digital processing. This is being driven by the availability of digital content, the desire to have wireless products, and the promise of improved sound through digital signal processing. We cover the main concepts behind digital audio processing for loudspeakers. We use a hands-on approach and interactively build up the signal chain using graphical tools. We discuss crossovers, equalizers, limiters, and perceptual loudness controls. Key concepts are reinforced through examples and real-time demos. The session is aimed at the practicing audio engineer and we go easy on math and theory. Instead of writing code we leverage modern design tools and you will leave ready to design your own processing chain.
Thursday, October 29, 11:00 am — 12:45 pm (Room 1A12)
Live Sound Seminar: LS1 - AC Power and Grounding
Chair:Mike Sokol, NoShockZone.org; Shenandoah University
Panelists:
Steve Lampen, Belden - San Francisco, CA, USA
Bill Sacks, Orban / Optimod Refurbishing - Hollywood, MD, USA
Abstract:
Much misinformation remains about what is needed to provide AC power for events—much of it potentially life-threatening advice. This panel will discuss how to provide AC power properly and safely and without causing noise problems. The session will cover power for small to large systems, from a couple of boxes on sticks up to multiple stages in ballrooms, road houses, and event centers, as well as large-scale installed systems, including multiple transformers and company switches, service types, generator sets, single- and three-phase services, and 240/120 and 208/120 configurations. Get the latest information on grounding and proper power configurations from this panel of industry veterans.
Thursday, October 29, 11:00 am — 12:30 pm (S-Foyer 1)
Poster: P3 - Transducers/Perception
P3-1 Predicting the Acoustic Power Radiation from Loudspeaker Cabinets: A Numerically Efficient Approach—Mattia Cobianchi, B&W Group Ltd. - West Sussex, UK; Martial Rousseau, B&W Group Ltd. - West Sussex, UK
Ideally, a loudspeaker cabinet should not contribute to the total sound radiation at all, acting instead as a perfectly rigid box that encloses the drive units. To approach this goal, state-of-the-art FEM software packages and laser Doppler vibrometers are the tools at our disposal. The modeling steps covered in the paper are: measuring and fitting orthotropic material properties, including damping; 3D mechanical modeling with a curvilinear coordinate system and thin elastic layers to represent glue joints; and scanning laser Doppler measurements and single-point vibration measurements with an accelerometer. Additionally, a numerically efficient post-processing approach used to extract the total radiated acoustic power and an example of the kind of improvement that can be expected from a typical design optimization are presented.
Convention Paper 9367 (Purchase now)
P3-2 New Method to Detect Rub and Buzz of Loudspeakers Based on Psychoacoustic Sharpness—Tingting Zhou, Nanjing Normal University - Nanjing, Jiangsu, China; Ming Zhang, Nanjing Normal University - Nanjing, Jiangsu, China; Chen Li, Nanjing Normal University - Nanjing, Jiangsu, China
The detection of loudspeaker distortion has been researched for a very long time. Researchers are committed to finding an objective way to detect Rub and Buzz (R&B) in loudspeakers that is consistent with human auditory perception. This paper applies psychoacoustics to the distortion detection of loudspeakers and describes a new method to detect R&B based on psychoacoustic sharpness. Experiments show that, compared with existing objective R&B detection methods, detection results based on the proposed method are more consistent with subjective judgments.
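For context, psychoacoustic sharpness is commonly defined in the psychoacoustics literature (Zwicker and Fastl) in terms of the specific loudness N'(z) over critical-band rate z; the exact weighting used in the paper may differ:

$$ S = 0.11\,\frac{\int_{0}^{24\,\mathrm{Bark}} N'(z)\, g(z)\, z \,\mathrm{d}z}{\int_{0}^{24\,\mathrm{Bark}} N'(z)\,\mathrm{d}z}\ \mathrm{acum} $$

Here g(z) is approximately 1 up to about 16 Bark and rises toward higher critical-band rates, so the high-frequency energy that dominates rub-and-buzz artifacts is weighted more heavily.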
Convention Paper 9368 (Purchase now)
P3-3 Modal Impedances and the Boundary Element Method: An Application to Horns and Ducts—Bjørn Kolbrek, Norwegian University of Science and Technology - Trondheim, Norway
Loudspeaker horns, waveguides, and other ducts can be simulated by general numerical methods, like the Finite Element or Boundary Element Methods (FEM or BEM), or by a method using a modal description of the sound field, called the Mode Matching Method (MMM). BEM and FEM can describe a general geometry but are often computationally expensive. MMM, on the other hand, is fast, easily scalable, requires no mesh generation and little memory but can only be applied to a limited set of geometries. This paper shows how BEM and MMM can be combined in order to efficiently simulate horns where part of the horn must be described by a general meshed geometry. Both BEM-MMM and MMM-BEM couplings are described, and examples given.
Convention Paper 9369 (Purchase now)
P3-4 Audibility Threshold of Auditory-Adapted Exponential Transfer-Function Smoothing (AAS) Applied to Loudspeaker Impulse Responses—Florian Völk, Technische Universität München - München, Germany; WindAcoustics UG (haftungsbeschränkt) - Windach, Germany; Yuliya Fedchenko, Technische Universität München - Munich, Germany; Hugo Fastl, Technical University of Munich - Munich, Germany
A reverberant acoustical system’s transfer function may show deep notches or pronounced peaks, requiring large linear amplification in the play-back system when used, for example, in auralization or for convolution reverb. It is common practice to apply spectral smoothing, with the aim of reducing spectral fluctuation without degrading auditory-relevant information. A procedure referred to as auditory-adapted exponential smoothing (AAS) was proposed earlier, adapted to the spectral properties of the hearing system by implementing frequency-dependent smoothing bandwidths. This contribution presents listening experiments aimed at determining the audibility threshold of auditory-adapted exponential smoothing, which is the maximum amount of spectral smoothing allowed without being audible. As the results depend on the specific acoustic system, parametrization guidelines are proposed.
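For context, a minimal sketch (generic fractional-octave smoothing of a synthetic response, not the auditory-adapted AAS weighting itself) showing the frequency-dependent-bandwidth idea that AAS refines:

import numpy as np

def smooth_fractional_octave(freqs, mag, fraction=3.0):
    # Average |H(f)| over a window of +/- 1/(2*fraction) octave around each bin,
    # i.e., a smoothing bandwidth that grows in proportion to frequency.
    smoothed = np.empty_like(mag)
    for i, f in enumerate(freqs):
        if f <= 0.0:
            smoothed[i] = mag[i]
            continue
        lo, hi = f * 2.0 ** (-0.5 / fraction), f * 2.0 ** (0.5 / fraction)
        sel = (freqs >= lo) & (freqs <= hi)
        smoothed[i] = mag[sel].mean()
    return smoothed

fs, n = 48000, 4096
ir = np.random.default_rng(1).standard_normal(n) * np.exp(-np.arange(n) / 300.0)  # synthetic IR
freqs = np.fft.rfftfreq(n, 1.0 / fs)
mag = np.abs(np.fft.rfft(ir))
mag_third_octave = smooth_fractional_octave(freqs, mag, fraction=3.0)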
Convention Paper 9371 (Purchase now)
P3-5 Developing a Timbrometer: Perceptually-Motivated Audio Signal Metering—Duncan Williams, University of Plymouth - Devon, UK
Early experiments suggest that a universally agreed-upon timbral lexicon is not possible, nor would such a tool be intrinsically useful to musicians, composers, or audio engineers. Therefore the goal of this work is to develop perceptually calibrated metering tools, with a similar interface and usability to that of existing loudness meters, by making use of a linear regression model to match large numbers of acoustic features to listener-reported timbral descriptors. This paper presents work toward a proof-of-concept combination of acoustic measurement and human listening tests in order to explore connections between 135 acoustic features and 3 timbral descriptors: brightness, warmth, and roughness.
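A minimal sketch of the modeling step described above (random placeholder data; real acoustic features and listening-test ratings are assumed):

import numpy as np

rng = np.random.default_rng(0)
n_clips, n_features = 40, 135
X = rng.standard_normal((n_clips, n_features))   # placeholder acoustic features per clip
y = rng.uniform(0.0, 10.0, (n_clips, 3))         # placeholder brightness/warmth/roughness ratings

# Least-squares fit with an intercept column; a real system would use far more
# clips (or regularization) than features.
X1 = np.hstack([X, np.ones((n_clips, 1))])
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict(features):
    # Map a new feature vector to predicted brightness, warmth, and roughness.
    return np.append(features, 1.0) @ coeffs

print(predict(X[0]))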
Convention Paper 9372 (Purchase now)
P3-6 A Method of Equal Loudness Compensation for Uncalibrated Listening Systems—Oliver Hawker, Birmingham City University - Birmingham, UK; Yonghao Wang, Birmingham City University - Birmingham, UK
Equal-loudness contours represent the sound-pressure-level-dependent frequency response of the auditory system, which implies an arbitrary change in the perceived spectral balance of a sound when the sound-pressure-level is modified. The present paper postulates an approximate proportional relationship between loudness and sound-pressure-level, permitting relative loudness modification of an audio signal while maintaining a constant spectral balance without an absolute sound-pressure-level reference. A prototype implementation is presented and accessible at [1]. Preliminary listening tests are performed to demonstrate the benefits of the described method.
Convention Paper 9373 (Purchase now)
Thursday, October 29, 12:00 pm — 12:45 pm (Room 1A07)
Engineering Brief: EB1 - Transducers—Part 1
Chair:
Michael Smithers, Dolby Laboratories - Sydney, NSW, Australia
EB1-1 Wireless Speaker Synchronization: Solved—Simon Forrest, Imagination Technologies - Hertfordshire, UK
Many high-end stereo systems offer the opportunity to connect several speakers together wirelessly to create a multi-room audio experience. However, linking speakers wirelessly to create stereo pairs or surround sound systems is technically challenging, due to the extremely tight synchronization necessary to accurately reproduce a faithful sound stage and maintain channel separation. Imagination measures several competing technologies on the market today and illustrates how innovative application of Wi-Fi networking protocols in audio chips can deliver several orders of magnitude improvement, creating opportunity for high quality wireless audio and producing results that are indistinguishable from wired speaker systems.
Engineering Brief 202 (Download now)
EB1-2 Multiphysical Simulation Methods for Loudspeakers—Advanced CAE-Based Simulations of Vibration Systems—Alfred Svobodnik, Konzept-X GmbH - Karlsruhe, Germany; Roger Shively, JJR Acoustics, LLC - Seattle, WA, USA; Marc-Olivier Chauveau, Moca Audio - Tours, France; Tommaso Nizzoli, Acoustic Vibration Consultant - Reggio Emilia, Italy; Dieter Thöres, Konzept-X GmbH - Karlsruhe, Germany
This is the second in a series of papers on the details of loudspeaker design using multiphysical computer-aided engineering simulation methods. In this paper the simulation methodology for accurately modeling the structural dynamics of a loudspeaker's vibration system will be presented. Primarily, the calculation of stiffness, or its inverse, the compliance, in the virtual world will be demonstrated. Furthermore, the predictive simulation of complex vibration patterns, e.g., rocking or break-up, will be shown. Finally the simulation of coupling effects to the motor system will be discussed. Results will be presented, correlating the simulated model results to the measured physical parameters. From that, the important aspects of the modeling that determine its accuracy will be discussed.
Engineering Brief 203 (Download now)
EB1-3 New Design Methodologies in Mark Levinson Amps—Todd Eichenbaum, Harman Luxury Audio - Shelton, CT, USA
The HARMAN Luxury Audio electronics engineering team has designed a completely new generation of Mark Levinson amplifiers. Combining tried-and-true technologies with innovative implementations and unique improvements has yielded products with exemplary measured and subjective performance. In this article are circuit design highlights of the No536, a fully balanced mono power amplifier rated for 400W/8 ohms and 800W/4 ohms.
Engineering Brief 204 (Download now)
Thursday, October 29, 2:15 pm — 3:15 pm (Room 1A14)
Networked Audio: N1 - Basic Networking and Layer 3 - Protocols: Layers, Models? A Disambiguation in the Context of Audio over IP
Presenter:Kieran Walsh, Audinate Pty. Ltd. - Ultimo, NSW, Australia
Abstract:
The OSI model is a great starting point for understanding a structure for integrating network protocols and creating software. Topics for discussion include:
• Examining the positives of a layered approach and filling in the “missing gaps” required to create a real implementation.
• An implementation from a “solution provider” (manufacturer) is different from creating a real “on the ground” working full system; the model and layered approach, however, can be valuable in converging these two challenges.
• “Protocols, standards, implementations”: these terms are used interchangeably, but they have distinct meanings; we will examine the differences and distinctions between them.
• Deploying core AoIP services in the context of other technologies that can be leveraged to make a fully working system function in an effective production environment.
• Distinguishing between standards, implementations, transports, protocols, and layers, and gaining better insight into what each means and how to define requirements for systems.
• Understanding the IT-centric approach to a network, and identifying challenges and workarounds when deploying an AoIP system.
• Understanding some techniques that “come for free” in an enterprise IT network environment.
Thursday, October 29, 2:30 pm — 5:30 pm (Room 1A08)
Paper Session: P4 - Transducers—Part 1: Headphones, Amplifiers, and Microphones
Chair:
Christopher Struck, CJS Labs - San Francisco, CA, USA; Acoustical Society of America
P4-1 Headphone Response: Target Equalization Trade-offs and Limitations—Christopher Struck, CJS Labs - San Francisco, CA, USA; Acoustical Society of America; Steve Temme, Listen, Inc. - Boston, MA, USA
The effects of headphone response and equalization are examined with respect to the influence on perceived sound quality. Free field, diffuse field, and hybrid real sound field targets are shown and objective response data for a number of commercially available headphones are studied and compared. Irregular responses are examined to determine the source of response anomalies, whether these can successfully be equalized and what the limitations are. The goal is to develop a robust process for evaluating and appropriately equalizing headphone responses to a psychoacoustically valid target and to understand the constraints.
Convention Paper 9374 (Purchase now)
P4-2 A Headphone Measurement System Covers both Audible Frequency and beyond 20 kHz—Naotaka Tsunoda, Sony Corporation - Shinagawa-ku, Tokyo, Japan; Takeshi Hara, Sony Corporation - Tokyo, Japan; Koji Nageno, Sony Corporation - Tokyo, Japan
A new headphone measurement system, consisting of a 1/8-inch microphone and a newly developed HATS (Head And Torso Simulator) with a coupler that has a realistic ear canal shape, is proposed to enable frequency response measurement across the entire audible range and beyond, up to 140 kHz. At the same time a new frequency response evaluation scheme based on HRTF correction is proposed. Measurement results obtained with this scheme enable much better understanding by allowing direct comparison with free-field loudspeaker frequency responses.
Convention Paper 9375 (Purchase now)
P4-3 Measurements of Acoustical Speaker Loading Impedance in Headphones and Loudspeakers—Jason McIntosh, McIntosh Applied Engineering - Eden Prairie, MN, USA
The acoustical designs of two circumaural headphones and a desktop computer speaker have been studied by measuring the acoustical impedance of the various components in their designs. The impedances were then used to build an equivalent circuit model for the devices that then predicted their pressure response. Good correlation was observed between the model and measurements. The impedance provides unique insight into the acoustic design that is not observed through electrical impedance or pressure response measurements that are commonly relied upon when designing such devices. By building models for each impedance structure, it is possible to obtain an accurate model of the whole system in which the effects of each component upon the device's overall performance can be seen.
Convention Paper 9376 (Purchase now)
P4-4 Efficiency Investigation of Switch-Mode Power Audio Amplifiers Driving Low Impedance Transducers—Niels Elkjær Iversen, Technical University of Denmark - Lyngby, Denmark; Henrik Schneider, Technical University of Denmark - Kgs. Lyngby, Denmark; Arnold Knott, Technical University of Denmark - Kgs. Lyngby, Denmark; Michael A. E. Andersen, Technical University of Denmark - Kgs. Lyngby, Denmark
The typical nominal resistance of an electrodynamic transducer spans 4 Ohms to 8 Ohms. This work examines the possibility of driving a transducer with a much lower impedance to enable the amplifier and loudspeaker to be directly driven by a low voltage source such as a battery. A method for estimating the amplifier rail voltage requirement as a function of the voice coil nominal resistance is presented. The method is based on a crest factor analysis of music signals and estimation of the electrical power requirement from a specific target sound pressure level. Experimental measurements confirm a huge performance leap in terms of efficiency compared to a conventional battery-driven sound system. Future optimization of low voltage, high current amplifiers for low impedance loudspeaker drivers is discussed.
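A simplified sketch of the kind of estimate described above (hypothetical driver and crest-factor values, not the paper's measurements):

import numpy as np

def rail_voltage(target_spl, sensitivity_db, re_ohm, crest_factor_db):
    # Average electrical power needed for the target SPL at 1 m, assuming the
    # same 1 W sensitivity for both coils, then peak voltage via the crest factor.
    p_avg = 10.0 ** ((target_spl - sensitivity_db) / 10.0)
    v_rms = np.sqrt(p_avg * re_ohm)
    return v_rms * 10.0 ** (crest_factor_db / 20.0)

# Example: 96 dB SPL target, 87 dB (1 W / 1 m) driver, 12 dB music crest factor.
print(rail_voltage(96.0, 87.0, 8.0, 12.0))   # conventional 8-ohm coil: tens of volts
print(rail_voltage(96.0, 87.0, 0.5, 12.0))   # low-impedance coil: battery-friendly voltage, higher current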
Convention Paper 9377 (Purchase now)
P4-5 Self-Oscillating 150 W Switch-Mode Amplifier Equipped with eGaN-FETs—Martijn Duraij, Technical University of Denmark - Lyngby, Denmark; Niels Elkjær Iversen, Technical University of Denmark - Lyngby, Denmark; Lars Press Petersen, Technical University of Denmark - Kgs. Lyngby, Denmark; Patrik Boström, Bolecano Holding AB - Helsingborg, Sweden
While clocked high-frequency switch-mode audio power amplifiers equipped with eGaN-FETs have been introduced in recent years, a novel self-oscillating eGaN-FET-equipped amplifier is presented here. A 150 Wrms amplifier has been built and tested with regard to performance and efficiency with an idle switching frequency of 2 MHz. The amplifier consists of a power-stage module with a self-oscillating loop and an error-reducing global loop. It was found that an eGaN-FET based amplifier shows promising potential for building high power density audio amplifiers with excellent audio performance. However, care must be taken with the effects caused by the higher switching frequency.
Convention Paper 9378 (Purchase now)
P4-6 Wind Noise Measurements and Characterization Around Small Microphone Ports—Jason McIntosh, Starkey Hearing Technologies - Eden Prairie, MN, USA; Sourav Bhunia, Starkey Hearing Technologies - Eden Prairie, MN, USA
The physical origins of microphone wind noise are discussed and measured. The measured noise levels are shown to correlate well with theoretical estimates of non-propagating local fluid-dynamic turbulence pressure variations called “convective pressure.” The free stream convective pressure fluctuations may already be present in a flow independent of its interactions with a device housing a microphone. Consequently, wind noise testing should be made in turbulent air flows rather than laminar ones. A metric based on the Speech Intelligibility Index (SII) is proposed for characterizing wind noise effects for devices primarily designed to work with speech signals, making it possible to evaluate the effect of nonlinear processing on reducing microphone wind noise.
Convention Paper 9379 (Purchase now)
Thursday, October 29, 3:30 pm — 4:30 pm (Room 1A14)
Networked Audio: N2 - AVB/TSN Ethernet Is Built-In Everywhere Now; How Do You Make the Most of It? A System Implementation Primer for Consultants and Tech Managers
Chair:Tim Shuttleworth, Renkus Heinz - Oceanside, CA, USA
Panelists:
Richard Bugg, Meyer Sound - North Hills, CA, USA
Jim Cooper, MOTU - Cambridge, MA, USA
Tom Knesel, Pivitec - Emmaus, PA, USA
Nathan Phillips, Coveloz Technologies - Ottawa, ON, Canada
Curtis Rex Reed, Harman - South Jordan, Utah
Abstract:
This presentation will introduce technology managers, integrators, and specifiers to the basics of distributing audio, video, and control signals over an Ethernet network in ready-to-play fashion. The presentation will also focus on system implementation with Time Sensitive Networking (TSN) standards—the evolution of Audio Video Bridging (AVB).
Attendees will gain a system-level understanding of how to achieve networked AV success. The session will discuss the advantages of using a network, give an overview of challenges and approaches, and provide tips and troubleshooting guidance for networking with AVB/TSN. Discover how easy it is to scale and upgrade TSN systems.
An overview of the methods of time synchronization will also be outlined. AVnu Alliance will start by reviewing system requirements for demanding applications such as performance venue installs, houses of worship, large convention systems, conference rooms, and broadcast, and will discuss the Ethernet capabilities needed for the network, including characteristics and definitions of TSN for these applications.
The presentation will highlight the importance of certification for interoperability. Finally, AVnu Alliance will present the existing tools and resources that designers need for successful TSN system operation.
Learning Objectives:
• Gain a basic understanding of distributing audio, video, and control systems over an Ethernet network and the advantages of doing so.
• Understand the existing tools and resources that designers need to successfully operate TSN systems.
• Understand what is required from a network for applications such as performance venue installs, houses of worship, conference rooms, etc.
Thursday, October 29, 4:00 pm — 5:30 pm (Room 1A13)
Product Development: PD4 - Electrical and Mechanical Measurement of Sound System Equipment
Presenter:Wolfgang Klippel, Klippel GmbH - Dresden, Germany
Abstract:
This tutorial explains the physical background and practical motivation for a new measurement standard, replacing IEC 60268-5, applicable to all kinds of transducers, loudspeakers, and other sound reproduction systems. The focus is on electrical and mechanical measurements (part B), complementing the acoustical measurements (part A) presented at the 137th AES Convention in Los Angeles last year. Voltage and current measured at the electrical terminals provide not only the electrical input impedance but also meaningful parameters of linear, nonlinear, and thermal models describing the behavior of the transducer in the small and large signal domains. This standard addresses long-term testing to assess power handling, heating processes, product reliability, and climate impact. New mechanical characteristics are derived from laser scanning techniques that are the basis for modal analysis of cone vibration and prediction of the acoustical output.
Thursday, October 29, 5:15 pm — 7:00 pm (Room 1A12)
Live Sound Seminar: LS2 - CANCELED
Thursday, October 29, 5:30 pm — 6:30 pm (Room 1A08)
Engineering Brief: EB2 - Spatial Audio
Chair:
Bryan Martin, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada
EB2-1 Array-Based HRTF Pattern Emulation for Auralization of 3D Outdoor Sound Environments with Direction-Based Muffling of Sources—Pieter Thomas, Ghent University - Gent, Belgium; Timothy Van Renterghem, Ghent University - Ghent, Belgium; Dick Botteldooren, Ghent University - Ghent, Belgium
Spatial audio reproduction techniques are widely employed for the subjective analysis of concert halls and, more recently, complex outdoor sound environments. In this work a binaural reproduction technique is developed based on a 32-channel spherical microphone array, optimized for the simulation of a virtual microphone with directional characteristics that approximate the directivity of the human head. A set of weights is calculated for each microphone of the constituting array based on a regularized least-squares solution. This technique allows for adaptation of the auditory scene based on source direction. The performance of variants of the technique has been evaluated by means of listening tests. Furthermore, its use for the auralization of outdoor soundscapes has been illustrated.
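A minimal sketch of the weight-calculation step (hypothetical open-sphere geometry and a simple target pattern; rigid-sphere scattering and the head-directivity target of the brief are not modeled):

import numpy as np

rng = np.random.default_rng(0)
n_mics, n_dirs = 32, 180
f, c, radius = 1000.0, 343.0, 0.05

# Hypothetical microphone positions on a sphere and horizontal plane-wave test directions.
mic_pos = rng.standard_normal((n_mics, 3))
mic_pos *= radius / np.linalg.norm(mic_pos, axis=1, keepdims=True)
angles = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles), np.zeros(n_dirs)], axis=1)

k = 2 * np.pi * f / c
A = np.exp(1j * k * dirs @ mic_pos.T)      # (n_dirs, n_mics) free-field plane-wave responses
target = 0.5 + 0.5 * np.cos(angles)        # simple cardioid-like target directivity

lam = 1e-2                                 # Tikhonov regularization
w = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_mics), A.conj().T @ target)
print(np.abs(A @ w - target).max())        # pattern-approximation error at this frequency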
Engineering Brief 205 (Download now)
EB2-2 Polar Pattern Comparisons for the Left, Center, and Right Channels in a 3-D Microphone Array—Margaret Luthar, Sonovo Mastering - Stavanger, Norway; Elaine Maltezos, University of Stavanger - Bergen, Norway
Standard 5.1 microphone arrays are long established and have been applied to psychoacoustic research, as well as for commercial purposes in film and music. Recent interest in the creative possibilities of “3-D audio” (a lateral layer of microphones, as well as an additional height layer) has led to research in both adapting 5.1 arrays for 3-D recordings as well as creating new methods to better capture the listener’s experience. The LCR configuration in a 5.1 array is a factor that contributes to the stability and localization of the auditory image in the horizontal plane. In this experiment, two different LCR configurations have been adapted for 9.1 in a traditional concert-recording environment. They are then compared in various combinations for their ability to produce a stable, natural, and effective frontal image in a 9.1 reproduction method. Preliminary listening suggests that the polar characteristics of the L, C, and R microphones do affect the sense of envelopment, spaciousness, and localization of the frontal image, as well as cohesiveness within the entire 9.1 image. These results have led to options for further study, as suggested by the researchers.
Engineering Brief 206 (Download now)
EB2-3 Coding Backward Compatible Audio Objects with Predictable Quality in a Very Spatial Way—Stanislaw Gorlow, Gorlow Brainworks - Paris, France
A gradual transition from channel-based to object-based audio can currently be observed throughout the film and broadcast industries. One paramount example of this trend is the new MPEG-H 3D Audio standard, which is under development. Other object-based standards in the marketplace are DTS:X and Dolby Atmos. In this engineering brief a newly developed prototype of an object-based audio coding system is introduced and discussed in terms of its technical characteristics. The codec can be of use wherever a given sound scene is to be re-rendered according to the listener’s preference or environment in a backward compatible manner. The areas of application cover not only interactive music listening or remixing, but also location-dependent, immersive, and 3D audio rendering.
Engineering Brief 207 (Download now)
EB2-4 Decorrelated Audio Imaging in Radial Virtual Reality Environments—Bryan Dalle Molle, University of Illinois at Chicago - Chicago, IL, USA; James Pinkl, University of Illinois at Chicago - Chicago, IL, USA; Mark Blewett, University of Illinois at Chicago - Chicago, IL, USA
University of Illinois at Chicago's CAVE2 is a large-scale, 320-degree radial visualization environment with a 360-degree 20.2 channel radial speaker system. The purpose of our research is to develop solutions for spatially accurate playback of audio within a virtual reality environment, reconciling differences between the circular speaker array, the location of a user in the physical space, and the location of virtual sound objects within CAVE2’s OmegaLib virtual reality software, all in real time. Previous research presented at AES 137 detailed our work on object geometry, dynamically mapping a virtual object’s width and distance to the speaker array with volume and delay compensation. Our recent work improves virtual width perception using dynamic decorrelation with transient fidelity, implemented via Supercollider on the CAVE2 sound server.
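For context, a minimal sketch of a generic random-phase decorrelation filter (not the CAVE2 SuperCollider implementation); keeping the filter short is one simple way to preserve transients:

import numpy as np

def decorrelation_filter(n_taps=512, seed=0):
    # Unit-magnitude, random-phase spectrum -> approximately all-pass FIR.
    rng = np.random.default_rng(seed)
    phases = rng.uniform(-np.pi, np.pi, n_taps // 2 - 1)
    spectrum = np.concatenate(([1.0], np.exp(1j * phases), [1.0]))
    return np.fft.irfft(spectrum, n=n_taps)

fs = 48000
x = np.random.default_rng(1).standard_normal(fs)   # one second of test signal
h = decorrelation_filter()                          # a different seed per speaker feed
y = np.convolve(x, h)[:len(x)]                      # decorrelated copy for another loudspeaker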
Engineering Brief 208 (Download now)
Friday, October 30, 9:00 am — 10:30 am (Room 1A10)
Broadcast and Streaming Media: B4 - Audio and IP: Are We There Yet?
Moderator:Steve Lampen, Belden - San Francisco, CA, USA
Panelists:
Kevin Gross, AVA Networks - Boulder, CO, USA
David Josephson, Josephson Engineering, Inc. - Santa Cruz, CA, USA
Dan Mortensen, Dansound Inc. - Seattle, WA, USA
Tony Peterle, Worldcast Systems - Miami, FL, USA
Tim Pozar, Fandor - San Francisco, CA, USA
Abstract:
In 2010, Reed Hundt, former head of the Federal Communications Commission, said in a speech at Columbia Business School, “[We] decided in 1994 that the Internet should be the common medium in the United States and broadcast should not be.” That was twenty-one years ago. So, are we there yet? I tried to invite Mr. Hundt to participate on this panel, but he is too well protected; I couldn’t even get an invitation to him.
This panel of esteemed experts will look at the “big picture” of audio in networked formats and internet delivery systems. Do we have the hardware and software we need? If not, what is missing? Can we expect the same quality, consistency, and reliability as we had in the old analog audio days? There are dozens, maybe hundreds, of companies using proprietary Layer 2/Layer 3 Ethernet for audio, and there is much work on combining or cross-fertilizing these systems, such as Dante and Ravenna. There are also new standards such as IEEE 802.1BA-2011 AVB (Audio Video Bridging) and IEEE 802.1ASbt TSN (Time-Sensitive Networking) that use specialized Ethernet switches in a network architecture. But these do not address anything outside of the Ethernet network itself. Then we have AES67, specifically looking at “high performance” IP-based audio.
Mixed in with this is the question “What is a broadcaster? Do you have to have a transmitter to be a broadcaster?” Consider that next year (2016) one company claims they will be the largest broadcaster in the world, and that company is Netflix.
Friday, October 30, 9:00 am — 12:30 pm (Room 1A08)
Paper Session: P6 - Transducers—Part 2: Loudspeakers
Chair:
Sean Olive, Harman International - Northridge, CA, USA
P6-1 Wideband Compression Driver Design, Part 1: A Theoretical Approach to Designing Compression Drivers with Non-Rigid Diaphragms—Jack Oclee-Brown, GP Acoustics (UK) Ltd. - Maidstone, UK
This paper presents a theoretical approach to designing compression drivers that have non-rigid radiating diaphragms. The presented method is a generalization of the Smith "acoustic mode balancing" approach to compression driver design that also considers the modal behavior of the radiating structure. It is shown that, if the mechanical diaphragm modes and acoustical cavity modes meet a certain condition, then the diaphragm non-rigidity is not a factor that limits the linear driver response. A theoretical compression driver design approximately meeting this condition is described and its performance evaluated using FEM models.
Convention Paper 9386 (Purchase now)
P6-2 Time/Phase Behavior of Constant Beamwidth Transducer (CBT) Circular-Arc Loudspeaker Line Arrays—D.B. (Don) Keele, Jr., DBK Associates and Labs - Bloomington, IN, USA
This paper explores the time and phase response of circular-arc CBT arrays through simulation and measurement. Although the impulse response of the CBT array is spread out in time, its phase response is found to be minimum phase at all locations in front of the array: up-down, side-to-side, and near-far. When the magnitude response is equalized flat with a minimum-phase filter, the resultant phase is substantially linear phase over a broad frequency range at all these diverse locations. This means that the CBT array is essentially time aligned and linear phase and, as a result, will accurately reproduce square waves anywhere within its coverage. Accurate reproduction of square waves is not necessarily audible, but many people believe that it is an important loudspeaker characteristic. The CBT array essentially forms a virtual point source but with the extremely uniform broadband directional coverage of the CBT array itself. When the CBT array is implemented with discrete sources, the impulse response mimics an FIR filter but with non-linear sample spacing and with a shape that looks like a roller coaster track viewed laterally. An analysis of the constant-phase wave fronts generated by a CBT array reveals that the sound waves essentially radiate from a point located at the center of curvature of the array’s circular arc and are essentially circular at all distances, mimicking a point source.
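A minimal sketch of how the minimum-phase claim can be checked numerically (standard real-cepstrum method on a synthetic impulse response, not the author's measurement data):

import numpy as np

def minimum_phase_spectrum(mag):
    # mag: one-sided magnitude on an N/2+1 grid -> full minimum-phase transfer
    # function, via folding of the real cepstrum.
    n = 2 * (len(mag) - 1)
    cep = np.fft.irfft(np.log(np.maximum(mag, 1e-12)), n=n)
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]
    fold[n // 2] = cep[n // 2]
    return np.exp(np.fft.rfft(fold))

# Synthetic impulse response standing in for a CBT measurement.
n = 4096
ir = np.zeros(n)
ir[10], ir[300] = 1.0, 0.4
H = np.fft.rfft(ir)
H_min = minimum_phase_spectrum(np.abs(H))

# Excess phase after removing a best-fit pure delay; a small residual means the
# measured response is essentially minimum phase plus delay.
excess = np.unwrap(np.angle(H)) - np.unwrap(np.angle(H_min))
k = np.arange(len(excess))
slope, intercept = np.polyfit(k, excess, 1)
print(np.max(np.abs(excess - (slope * k + intercept))))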
Convention Paper 9387 (Purchase now)
P6-3 Progressive Degenerate Ellipsoidal Phase Plug—Charles Hughes, Excelsior Audio - Gastonia, NC, USA; AFMG - Berlin, Germany
This paper will detail the concepts and design of a new phase plug. This device can be utilized to transform a circular planar wave front to a rectangular planar wave front. Such functionality can be very useful for line array applications as well as for feeding the input, or throat section, of a rectangular horn from the output of conventional compression drivers. The design of the phase plug allows for the exiting wave front to have either concave or convex curvature if a planar wave front is not desired. One of the novel features of this device is that there are no discontinuities within the phase plug.
Convention Paper 9388 (Purchase now)
P6-4 Low Impedance Voice Coils for Improved Loudspeaker Efficiency—Niels Elkjær Iversen, Technical University of Denmark - Lyngby, Denmark; Arnold Knott, Technical University of Denmark - Kgs. Lyngby, Denmark; Michael A. E. Andersen, Technical University of Denmark - Kgs. Lyngby, Denmark
In modern audio systems utilizing switch-mode amplifiers the total efficiency is dominated by the rather poor efficiency of the loudspeaker. For decades voice coils have been designed so that nominal resistances of 4 to 8 Ohms are obtained, even though modern audio amplifiers using switch-mode technology can be designed for much lower loads. A thorough analysis of loudspeaker efficiency is presented and its relation to the voice coil fill factor is described. A new parameter, the driver’s mass ratio, is introduced; it indicates how much a fill factor optimization will improve a driver’s efficiency. Different voice coil winding layouts are described and their fill factors analyzed. It is found that by lowering the nominal resistance of a voice coil, using rectangular wire, one can increase the fill factor. Three voice coils are designed for a standard 10” woofer and the corresponding frequency responses are estimated. For this woofer it is shown that the sensitivity can be improved by approximately 1 dB, corresponding to a 30% efficiency improvement, just by increasing the fill factor using a low impedance voice coil with rectangular wire.
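For orientation, the textbook reference-efficiency relation the abstract builds on, evaluated for example parameter values (not the paper's drivers):

import numpy as np

rho0, c = 1.18, 345.0        # air density [kg/m^3] and speed of sound [m/s]

def reference_efficiency(Bl, Re, Mms, Sd):
    # eta0 = rho0 * (Bl)^2 * Sd^2 / (2*pi*c * Re * Mms^2); sensitivity approx. 112 + 10*log10(eta0).
    eta0 = rho0 * Bl**2 * Sd**2 / (2 * np.pi * c * Re * Mms**2)
    return eta0, 112.0 + 10 * np.log10(eta0)

# Hypothetical 10-inch woofer: a conventional coil vs. a low-impedance coil whose
# higher fill factor yields more Bl^2 per ohm of DC resistance.
print(reference_efficiency(Bl=10.0, Re=6.0, Mms=0.040, Sd=0.033))
print(reference_efficiency(Bl=3.2, Re=0.5, Mms=0.040, Sd=0.033))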
Convention Paper 9389 (Purchase now)
P6-5 Effectiveness of Exotic Vapor-Deposited Coatings on Improving the Performance of Hard Dome Tweeters—Peter John Chapman, Harman - Denmark; Bang & Olufsen Automotive
The audio industry is constantly striving for new and different methods with which to improve the sound quality and performance of components in the signal chain. In many cases, however, insufficient evidence is provided for the benefit of so-called improvements. This paper presents the results of a scientific study analyzing the effectiveness of applying vapor-deposited diamond-like-carbon, chromium, and chromium nitride coatings to aluminum and titanium hard dome tweeters. Careful attention was paid during the processing, assembly, and measurement of the tweeters to control for, and equalize the influence of, other factors so that a robust analysis could be made. The objective results were supplemented with listening tests between the objectively most significant change and the control.
Convention Paper 9390 (Purchase now)
P6-6 Wideband Compression Driver Design. Part 2, Application to a High Power Compression Driver with a Novel Diaphragm Geometry—Mark Dodd, Celestion - Ipswich, Suffolk, UK
Performance limitations of high-power wide-bandwidth conventional and co-entrant compression drivers are briefly reviewed. An idealized co-entrant compression driver is modeled and its acoustic performance limitations discussed. The beneficial effect of axisymmetry is illustrated using results from numerical models. The vibrational behavior of spherical-cap, conical, and bi-conical diaphragms is compared. Axiperiodic membrane geometries consisting of circular arrays of features are discussed. This discussion leads to the conclusion that, for a given feature size, annular axiperiodic diaphragms have vibrational properties mostly dependent on the width of the annulus rather than its diameter. The numerically modeled and measured acoustic performance of a high-power wide-bandwidth compression driver using an annular axiperiodic membrane, with vibrational and acoustic modes optimized, is discussed.
Convention Paper 9391 (Purchase now)
P6-7 Dual Diaphragm Asymmetric Compression Drivers—Alexander Voishvillo, JBL/Harman Professional - Northridge, CA, USA
A theory of dual compression drivers was described earlier and the design was implemented in several JBL Professional loudspeakers. This type of driver consists of two motors and two annular diaphragms connected through similar phasing plugs to a common acoustical load. The new concept is likewise based on two motors and acoustically similar phasing plugs, but the diaphragms are mechanically “tuned” to different frequency ranges. Summation of the acoustical signals on the common acoustical load provides an extended frequency range compared to the design with identical diaphragms. Theoretically, maximum overall SPL sensitivity is achieved by in-phase radiation of the diaphragms. Principles of operation of the new dual asymmetric driver are explained using a combination of matrix analysis, finite element analysis, and data obtained from a scanning vibrometer, and the electroacoustic measurements are presented. A comparison of the performance of these dual drivers and the earlier fully symmetric designs is provided.
Convention Paper 9392 (Purchase now)
Friday, October 30, 10:30 am — 11:30 am (Room 1A14)
Networked Audio: N3 - AVB/TSN Implementation for Live Sound & House of Worship
Presenter:Tom Knesel, Pivitec - Emmaus, PA, USA
Abstract:
Ethernet AVB/TSN (Time Sensitive Networking) enables precise timing and synchronization and bandwidth reservation, making it an ideal solution for several of the consistency and convenience issues musicians face on the road and during live performances. The international rock band ACCEPT has been touring the globe for over three decades, and for most of that time the band had to lug heavy performance equipment onto planes, trains, and taxis or take the risk of using unfamiliar local equipment at each venue. They needed a solution that eliminated some gear and ensured a consistent sound and performance at each venue. That’s where a compact touring system powered by Audio Video Bridging (AVB) stepped in.
For Houses of Worship, similar solutions can be implemented. NOW Church in Ocala, FL, has always incorporated cutting-edge technology into their facility, but they were looking for a system that would take them into the future. After hearing about the benefits of AVB they made the move to an AVID VENUE system for their Front of House, as well as the Pivitec personal monitoring system that they have described as a “game-changer.”
Tom Knesel, Co-Founder and President of Pivitec, will walk through the specifics of the AVB enabled systems for each install including lessons learned and how AVB was monumental in providing a powerful experience during ACCEPT’s and NOW Church’s performances. Knesel will present how AVB allowed these two installations to combat common sound issues on stage, create pre-sets, simplify travel, and most importantly, give them a future-proof way to take advantage of next-gen compatibility with hardware and software from other manufacturers.
Friday, October 30, 11:00 am — 11:45 am (Stage LSE)
Live Sound Expo: Theatrical Vocal Miking
Presenters:Ken Travis
Jim van Bergen
Abstract:
In theatrical vocal applications, mics should largely be heard and not seen. Our session covers the practical issues of reproducing song and voice from the stage, including body mic dressing, the use of omnis vs. directional polar patterns, and earset vs. hairline mics.
AES Members can watch a video of this session for free.
Friday, October 30, 12:00 pm — 12:45 pm (Stage LSE)
Live Sound Expo: Wireless Issues for Live Theater: Broadway and Beyond
Moderator:Karl Winkler, Lectrosonics - Rio Rancho, NM, USA
Panelists:
Christopher Evans, The Benedum Center - Pittsburgh, PA, USA
Simon Matthews
Abstract:
Manhattan’s Broadway represents one of the most hostile environments imaginable for wireless microphone use. How do sound designers and system engineers cope with the RF soup that fills the ether in “The Great White Way,” and what lessons learned can be applied to theater applications in general? This session will offer answers.
AES Members can watch a video of this session for free.
Friday, October 30, 12:00 pm — 1:00 pm (Room 1A07)
Engineering Brief: EB3 - Transducers—Part 2
Chair:
Michael Smithers, Dolby Laboratories - Sydney, NSW, Australia
EB3-1 Dual Filtering Technique for Speech Signal Enhancements—Mahdi Ali, Hyundai America Technical Center, Inc. - Superior Township, MI, USA
Hands-free applications in automobiles, such as voice recognition and Bluetooth communications, have become one of the great features added to vehicles’ infotainment systems. However, cabin and road noises degrade audio quality and negatively impact consumers’ experience. Research has been conducted and several noise reduction techniques have been proposed. However, due to the complexity of noise environments inside and outside vehicles, hands-free sound quality still poses an issue for consumers. This persistent problem calls for more research in this field. This paper proposes a novel technique to reduce noise and enhance voice recognition and Bluetooth audio quality in vehicles’ hands-free applications. It utilizes a dual Kalman Filtering (KF) technique to suppress noise. The method has been validated in a MATLAB/SIMULINK simulation environment, which showed improvements in noise reduction in both Gaussian and non-Gaussian environments.
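For orientation, a minimal sketch of a single scalar Kalman filter on a toy signal (not the dual-KF structure proposed in the brief), showing the basic predict/update recursion such methods build on:

import numpy as np

rng = np.random.default_rng(0)
true_state = np.cumsum(0.05 * rng.standard_normal(500))   # hidden slowly varying signal
obs = true_state + 0.5 * rng.standard_normal(500)         # noisy measurements

q, r = 0.05 ** 2, 0.5 ** 2     # process and measurement noise variances
x_hat, p = 0.0, 1.0            # state estimate and its variance
est = []
for z in obs:
    p = p + q                         # predict
    k = p / (p + r)                   # Kalman gain
    x_hat = x_hat + k * (z - x_hat)   # correct with the innovation
    p = (1.0 - k) * p
    est.append(x_hat)

print(np.mean((np.array(est) - true_state) ** 2), np.mean((obs - true_state) ** 2))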
Engineering Brief 209 (Download now)
EB3-2 Implementation of Segmented Circular-Arc Constant Beamwidth Transducer (CBT) Loudspeaker Arrays—D.B. (Don) Keele, Jr., DBK Associates and Labs - Bloomington, IN, USA
Circular-arc loudspeaker line arrays composed of multiple loudspeaker sources are used very frequently in loudspeaker applications to provide uniform vertical coverage [1, 2, and 4]. To simplify these arrays, they may be formed using multiple straight-line segments or individual straight-line arrays. This approximation introduces errors because some of the speakers are no longer located on the circular arc and exhibit a “bulge error.” This error decreases as the number of segments increases or as the splay angle of an individual straight segment is decreased.
The question is: how small does the segment splay angle have to be so that the overall performance is not compromised compared to the non-segmented version of the array? Based on two simple spacing limitations that govern the upper operating frequency for each type of array, this paper shows that the bulge deviation should be no more than about one-fourth the center-to-center spacing of the sources located on each straight segment and that, surprisingly, the maximum splay angle and array radius depend only on the number (N) of equally spaced sources on a straight segment. As the number of sources on a segment increases, the maximum segment splay angle decreases and the required minimum array radius of curvature increases. Design guidelines are presented that allow the segmented array to have nearly the same performance as the exact circular-arc array.
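One way to see the N-dependence, a minimal sketch using the quarter-spacing criterion stated above together with small-angle approximations (illustrative only; the paper's exact guidelines may differ):

import numpy as np

def segment_limits(N, d):
    # N equally spaced sources with spacing d on one straight segment: the chord
    # length is (N-1)*d and the mid-segment bulge off the arc is about chord^2/(8*R).
    chord = (N - 1) * d
    R_min = chord ** 2 / (8.0 * (d / 4.0))        # bulge = d/4  ->  R_min = (N-1)^2 * d / 2
    splay_max_deg = np.degrees(chord / R_min)     # about 2/(N-1) radians, independent of d
    return R_min, splay_max_deg

for N in (2, 4, 8):
    R_min, splay = segment_limits(N, d=0.1)       # d = 10 cm spacing (example)
    print(N, round(R_min, 2), round(splay, 1))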
Engineering Brief 210 (Download now)
EB3-3 Speech Intelligibility Advantages Using an Acoustic Beamformer Display—Durand Begault, NASA Ames Research Center - Moffett Field, CA, USA; Kaushik Sunder, NASA Ames Research Center - Moffett Field, CA, USA; San Jose State University Foundation - San Jose, CA, USA; Martine Godfroy, NASA Ames Research Center - Moffett Field, CA, USA; San Jose State University Foundation; Peter Otto, UC San Diego - La Jolla, CA, USA
A speech intelligibility test conforming to the Modified Rhyme Test of ANSI S3.2 “Method for Measuring the Intelligibility of Speech Over Communication Systems” was conducted using a prototype 12-channel acoustic beamformer system. The target speech material (signal) was identified against speech babble (noise), with calculated signal-noise ratios of 0, 5 and 10 dB. The signal was delivered at a fixed beam orientation of 135 degrees (re 90 degrees as the frontal direction of the array) and the noise at 135 (co-located) and 0 degrees (separated). A significant improvement in intelligibility from 58% to 75% was found for spatial separation for the same signal-noise ratio (0 dB). Significant effects for improved intelligibility due to spatial separation were also found for higher signal-noise ratios.
Engineering Brief 211 (Download now)
EB3-4 Designing Near-Field MVDR Acoustic Beamformers for Voice User Interfaces—Andrew Stanford-Jason, XMOS Ltd. - Bristol, UK
We present an analysis and design recommendations for a reduced-computational-complexity minimum variance distortionless response (MVDR) beamforming microphone. An overview of MVDR beamforming is given and then decomposed into a generalized implementation to aid mapping to a microcontroller, with some discussion of optimizations for real-world performance.
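A minimal sketch of textbook MVDR weights with a near-field steering vector (hypothetical four-microphone geometry; the brief's reduced-complexity decomposition is not reproduced):

import numpy as np

c, f = 343.0, 1000.0
mics = np.array([[0.00, 0.0, 0.0], [0.04, 0.0, 0.0],
                 [0.08, 0.0, 0.0], [0.12, 0.0, 0.0]])   # 4-mic line, 4 cm spacing
source = np.array([0.5, 0.3, 0.0])                      # talker roughly 0.6 m away

r = np.linalg.norm(mics - source, axis=1)
d = np.exp(-1j * 2 * np.pi * f * r / c) / r             # near-field (spherical-wave) steering vector

# Toy noise covariance estimated from random snapshots, plus diagonal loading.
rng = np.random.default_rng(0)
snaps = rng.standard_normal((4, 2000)) + 1j * rng.standard_normal((4, 2000))
R = snaps @ snaps.conj().T / 2000 + 1e-3 * np.eye(4)

Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d.conj() @ Rinv_d)                        # w = R^-1 d / (d^H R^-1 d)
print(np.abs(w.conj() @ d))                             # = 1: distortionless toward the source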
Engineering Brief 212 (Download now)
Friday, October 30, 1:00 pm — 1:45 pm (Stage LSE)
Live Sound Expo: Theater Sound System Design and Optimization
Presenters:Andrew Keister, AKD - New York, NY, USA
Bob McCarthy, Meyer Sound Labs - New York, NY, USA
Abstract:
Theater sound designers can face architectural and aesthetic concerns within a given facility, audio content that ranges from dialog-heavy drama to rocking revues, and a blend of live and recorded elements. Seasoned veterans of theatrical sound design will share their experience.
AES Members can watch a video of this session for free.
Friday, October 30, 1:45 pm — 2:30 pm (Stage PSE)
Project Studio Expo: Mix and Mastering Optimized for Streaming
Presenter:Thomas Lund, Genelec Oy - Iisalmi, Finland
Abstract:
Loudness-based normalization in distribution is a game-changer. Even the resulting loudness at the consumer now follows an inverted U curve as mastering levels are cranked up. Free tools for equal-loudness comparisons are shown and new streaming recommendations from AES and EBU are summarized. The session includes listening examples and tips for optimized delivery that will stand the test of time.
AES Members can watch a video of this session for free.
Friday, October 30, 1:45 pm — 3:15 pm (Room 1A10)
Broadcast and Streaming Media: B6 - Production of A Prairie Home Companion
Moderator:John Holt, Retired
Panelists:
Samuel Hudson, Producer/Technical Director - St. Paul, Minnesota USA
Nick Kereakos
Thomas Scheuzger, Broadcast/Transmission Engineer - Saint Paul, MN, USA
Abstract:
“From the control board at the Orpheum, PHC travels via underground phone lines to the tiny Satellite Control Room on the fourth floor of Minnesota Public Radio, from there by cable to MPR’s transmitting dish in a junkyard on the East Side of Saint Paul, and from there 22,300 miles to Western Union’s Westar IV satellite…”
A lot has changed technically since that was written over 30 years ago for the 10th anniversary of "A Prairie Home Companion." You’ll hear a little history and a lot about how the technology has changed and will change the production and distribution of this iconic radio program.
Friday, October 30, 2:00 pm — 2:45 pm (Stage LSE)
Live Sound Expo: Theatrical Console Automation
Presenters:Richard Ferriday, Cadac consoles - UK
Jason Crystal
Matt Larson, DiGiCo - Group One Limited, National Sales Manager - Farmingdale, NY
Abstract:
Scene and snapshot storage and recall, working with time code, synchronizing with lighting and EFX—these are all among the components of modern theatrical audio production. This session examines the console automation utilized to help the show go on.
AES Members can watch a video of this session for free.
Friday, October 30, 2:00 pm — 3:45 pm (Room 1A12)
Live Sound Seminar: LS3 - Sound System Design and Optimization
Chair:Bob McCarthy, Meyer Sound Labs - New York, NY, USA
Panelists:
Jamie Anderson, Rational Acoustics LLC - Putnam, CT, USA
Andrew Keister, AKD - New York, NY, USA
Dominic Sack, Sound Associates, Inc.
Nevin Steinberg, Nevin Steinberg Sound Design - New York, NY, USA
Abstract:
Sound system design and tuning is a multi-step process that begins long before the pink noise can be heard. What are the steps and procedures taken to ensure a successful tuning? How are clients convinced to provide the time and resources to do this vital work? The panel will also discuss how to prioritize limited resources when time is short and what can be done ahead of time.
Friday, October 30, 2:00 pm — 5:00 pm (Room 1A08)
Paper Session: P9 - Transducers—Part 3: Loudspeakers
Chair:
Peter John Chapman, Harman - Denmark; Bang & Olufsen Automotive
P9-1 A Model for the Impulse Response of Distributed-Mode Loudspeakers and Multi-Actuator Panels—David Anderson, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
Panels driven into transverse (bending) vibrations by one or more small force drivers are a promising alternative approach in loudspeaker design. A mechanical-acoustical model is presented here that enables computation of the acoustic transient response of such loudspeakers driven by any number of force transducers at arbitrary locations on the panel and at any measurement point in the acoustic radiation field. Computation of the on- and off-axis acoustic radiation from a panel confirms that the radiated sound is spatially diffuse. Unfortunately, this favorable feature of vibrating panel loudspeakers is accompanied by significant reverberant effects and such loudspeakers are poor at reproducing signals with rapid transients.
Convention Paper 9409 (Purchase now)
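For readers wanting a feel for the modal behavior underlying such panel models, here is a toy Python calculation of the bending-mode frequencies of a simply supported rectangular panel, a common idealization in DML/MAP analysis. The panel dimensions and material values are placeholders, and the paper's full mechanical-acoustical model is not reproduced.

```python
# Toy modal-frequency calculation for a simply supported rectangular panel
# (a common idealization for DML/MAP modeling; dimensions and material values
# below are placeholders, not taken from the paper).
import numpy as np

def panel_mode_freqs(Lx, Ly, h, E, rho, nu, max_order=4):
    """Return bending-mode frequencies f_mn (Hz) for mode indices m, n >= 1."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))    # flexural rigidity
    rho_s = rho * h                          # surface mass density
    freqs = {}
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            f = (np.pi / 2.0) * np.sqrt(D / rho_s) * ((m / Lx)**2 + (n / Ly)**2)
            freqs[(m, n)] = f
    return freqs

if __name__ == "__main__":
    # Example: a 0.6 m x 0.4 m, 3 mm polycarbonate-like panel (assumed values).
    modes = panel_mode_freqs(Lx=0.6, Ly=0.4, h=0.003, E=2.3e9, rho=1200.0, nu=0.37)
    for (m, n), f in sorted(modes.items(), key=lambda kv: kv[1])[:6]:
        print(f"mode ({m},{n}): {f:7.1f} Hz")
```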
P9-2 Loudspeaker Rocking Modes (Part 1: Modeling)—William Cardenas, Klippel GmbH - Dresden, Germany; Wolfgang Klippel, Klippel GmbH - Dresden, Germany
The rocking of the loudspeaker diaphragm is a severe problem in headphones, micro-speakers, and other kinds of loudspeakers, causing voice coil rubbing that limits the maximum acoustical output at low frequencies. The root causes of this problem are small irregularities in the circumferential distribution of the stiffness, mass, and magnetic field in the gap. A dynamic model describing the mechanism governing rocking modes is presented, and a suitable structure for the separation and quantification of the three root causes exciting the rocking modes is developed. The model is validated experimentally for the three root causes, and the responses are discussed, forming a basic diagnostic analysis.
Convention Paper 9410 (Purchase now)
P9-3 Active Transducer Protection Part 1: Mechanical Overload—Wolfgang Klippel, Klippel GmbH - Dresden, Germany
The generation of sufficient acoustical output by smaller audio systems requires maximum exploitation of the usable working range. Digital preprocessing of audio input signals can be used to prevent a mechanical or thermal overload that generates excessive distortion and eventually damages the transducer. This first of two related papers focuses on mechanical protection, defining useful technical terms and the theoretical framework needed to compare existing algorithms and to develop the meaningful specifications required for adjusting the protection system to a particular transducer. The new concept is illustrated with a micro-speaker, and the data exchange and communication between transducer manufacturer, software provider, and system integrator are discussed.
Convention Paper 9411 (Purchase now)
P9-4 Horns Near Reflecting Boundaries—Bjørn Kolbrek, Norwegian University of Science and Technology - Trondheim, Norway
It is well known that when a sound source is placed near one or more walls, the power output increases due to the mutual coupling between the source and its image sources. This is reflected in an increase in the low frequency radiation resistance as seen by the sources. While direct radiating loudspeakers may benefit from this whenever the sources are within about a quarter wavelength of each other, horns will behave differently depending on whether the increase in radiation resistance comes within the pass band of the horn or not. This has implications for the placement of corner horns. In this paper the Mode Matching Method (MMM) is used together with the modal mutual radiation impedance and the concept of image sources to compute the throat impedance and radiated sound pressure of horns placed near infinite, perpendicular reflecting boundaries. The MMM is compared with another numerical method, the Boundary Element Rayleigh Integral Method (BERIM), and with measurements and is shown to give good agreement with both. The MMM also has significantly shorter computation time than BERIM, making it attractive for use for the initial iterations of a design, or for optimization procedures.
Convention Paper 9412 (Purchase now)
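As a small numeric illustration of the boundary-loading effect described above (not the paper's Mode Matching Method), the sketch below uses the textbook image-source term 1 + sin(kd)/(kd) to show how the low-frequency radiation resistance of a simple monopole source rises near a single rigid wall. The source distance is an arbitrary example.

```python
# Toy illustration (not the paper's Mode Matching Method): low-frequency boost of
# radiation resistance for a monopole source at distance h from one rigid wall,
# modeled with a single image source via the classic 1 + sin(kd)/(kd) term.
import numpy as np

def boundary_gain_db(freq_hz, h, c=343.0):
    """Relative radiation-resistance gain (dB) vs. free field for one rigid wall."""
    k = 2.0 * np.pi * freq_hz / c
    d = 2.0 * h                               # distance to the image source
    factor = 1.0 + np.sinc(k * d / np.pi)     # np.sinc(x) = sin(pi*x)/(pi*x)
    return 10.0 * np.log10(factor)

if __name__ == "__main__":
    for f in (30, 60, 125, 250, 500, 1000):
        print(f"{f:5d} Hz: {boundary_gain_db(f, h=0.5):+5.2f} dB")
    # At low frequencies the factor approaches 2 (+3 dB); a trihedral corner
    # (three boundaries, seven images) would approach +9 dB in the same limit.
```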
P9-5 State-Space Modeling of Loudspeakers Using Fractional Derivatives—Alexander King, Technical University of Denmark - Kgs. Lyngby, Denmark; Finn T. Agerkvist, Technical University of Denmark - Kgs. Lyngby, Denmark
This work investigates the use of fractional order derivatives in modeling moving-coil loudspeakers. A fractional order state-space solution is developed, leading the way towards incorporating nonlinearities into a fractional order system. The method is used to calculate the response of a fractional harmonic oscillator, representing the mechanical part of a loudspeaker, showing the effect of the fractional derivative and its relationship to viscoelasticity. Finally, a loudspeaker model with a fractional order viscoelastic suspension and fractional order voice coil is fit to measurement data. It is shown that the identified parameters can be used in a linear fractional order state-space model to simulate the loudspeaker's time-domain response.
Convention Paper 9413 (Purchase now)
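To make the fractional-derivative idea concrete, here is a generic Grünwald–Letnikov discretization of a fractional "harmonic oscillator" m·x'' + c·D^α x + k·x = F(t), standing in for the mechanical part of a driver. This is a textbook approximation with placeholder parameters, not the authors' fractional state-space formulation.

```python
# Sketch of a fractional oscillator m*x'' + c*D^alpha x + k*x = F(t), using the
# Gruenwald-Letnikov approximation of the fractional derivative. This is a
# generic textbook discretization for illustration, not the authors' fractional
# state-space formulation; parameter values are placeholders.
import numpy as np

def gl_weights(alpha, n):
    """Gruenwald-Letnikov binomial weights w_j for D^alpha."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def simulate(alpha=0.7, m=0.01, c=0.5, k=4000.0, dt=1e-4, n=5000):
    w = gl_weights(alpha, n)
    x = np.zeros(n)
    F = np.zeros(n)
    F[2] = 1.0 / dt                            # impulse-like excitation at the first computed step
    a = m / dt**2 + c * dt**(-alpha) * w[0] + k
    for i in range(2, n):
        hist = c * dt**(-alpha) * np.dot(w[1:i + 1], x[i - 1::-1])
        x[i] = (F[i] + m * (2.0 * x[i - 1] - x[i - 2]) / dt**2 - hist) / a
    return x

if __name__ == "__main__":
    x = simulate()
    print("peak displacement:", x.max(), " final value:", x[-1])
```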
P9-6 Comparative Static and Dynamic FEA Analysis of Single and Dual Voice Coil Midrange Transducers—Felix Kochendörfer, JBL/Harman Professional - Northridge, CA USA; Alexander Voishvillo, JBL/Harman Professional - Northridge, CA, USA
The concept of dual-coil direct-radiating loudspeakers has been known for several decades, and JBL Professional pioneered the design and application of dual-coil woofers and midrange loudspeakers. Several properties of dual-coil transducers differentiate them from the traditional single voice coil design. The first is better heat dissipation: the dual coil may be considered a traditional coil split into two parts, each positioned in its own magnetic gap. The second is the symmetry of the force factor (Bl product) versus the position of the voice coils in their gaps, explained by the fact that as one coil leaves its gap, the other enters its own. These two features are well researched and described in the literature [1, 2]. Less is known about the advantages of dual-coil transducers related to flux modulation and the dependence of the alternating magnetic flux (and corresponding voice coil inductance) on frequency, current, and voice coil position. In this work a comparison of a regular single-coil design and a dual-coil configuration is carried out through dynamic magnetic FEA modeling and measurements.
Convention Paper 9414 (Purchase now)
Friday, October 30, 2:30 pm — 4:00 pm (Room 1A13)
Networked Audio: N5 - Dante Case Studies
Presenters:John Huntington, NYC College of Technology - Brooklyn, NY, USA
Sam Kusnetz, Team Sound - Brooklyn, NY, USA
Joe Patten, Communications Design Associates - Canton, MA, USA
Abstract:
Part 1: Using an Audio Network for a Themed Attraction in an Academic Environment Sound Designer Sam Kusnetz and Network Engineer John Huntington give an overview of the Dante network that is the backbone of the audio system for the Gravesend Inn haunted attraction at Citytech in downtown Brooklyn. Here are two learning points: • The benefits of networked audio in themed attractions • Using networked audio over managed networks.
Part 2: Cost Saving and Digital Audio Networking. The use of digitized audio networks has changed the flow of information, and the cost associated with it, for the better. More channels of audio are available in more locations, with the installation cost greatly reduced. Benefits: • Savings in infrastructure such as cabling and conduit • Flexibility with routes or multiple routes/distribution of audio • Density of audio paths, 128 channels over a single link via CAT6
Friday, October 30, 3:00 pm — 3:45 pm (Stage LSE)
Live Sound Expo: (Audio) Networking for Theater
Presenter:Marc Brunke, Optocore GmbH - Grafelfing, Germany
Abstract:
As with audio infrastructure in general, digital audio networking is permeating the theater. We examine why audio networking is finding a natural fit into theatrical applications, and discuss the details of network implementation.
AES Members can watch a video of this session for free.
Friday, October 30, 4:00 pm — 4:45 pm (Stage LSE)
Live Sound Expo: Theatrical Sound Design
Presenter:Simon Matthews
Abstract:
Almost every theatrical production starts from scratch for its sound design, an experimental process developed and honed during pre-production and rehearsal. Sonic elements, textures, and effects are hand-crafted throughout the process. Our presenters discuss their process of working across development in DAWs and translation to the stage, including modern tools like plug-ins that can provide a time-saving predictive bridge between pre-production and a realized design.
AES Members can watch a video of this session for free.
Friday, October 30, 4:00 pm — 5:45 pm (Room 1A12)
Live Sound Seminar: LS4 - Theatrical Microphone Dressing
Chair:Mary McGregor, Freelance, Local 1 - New York, NY, USA
Panelists:
John Cooper, Local 1 IATSE - New York, NY, USA; Les Miserables Sound Dept
Anna-Lee Craig, IATSE Member - New York City, NY, USA
Abstract:
Fitting actors with wireless microphone elements and transmitters is an art form: ensuring the actor is comfortable and the electronics are safe and secure, and getting the proper sound with minimal detrimental audio artifacts, all while maintaining the visual illusion. Two of the most widely recognized artisans in this field provide hands-on demonstrations of basic technique along with some time-tested “tricks of the trade.”
Friday, October 30, 4:30 pm — 6:30 pm (Room 1A06)
Networked Audio: N6 - Network Performance Requirements for Audio Applications
Chair:Jim Meyer, Clair Global - Lititz, PA, USA
Panelists:
Gene Gerhiser, National Public Radio - Washington, DC, USA
Kevin Gross, AVA Networks - Boulder, CO, USA
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany
Greg Shay, The Telos Alliance - Cleveland, OH, USA
Dave Täht, Bufferbloat.net - Ft Myers, FL, USA; Karlstad University - Karlstad, Sweden
Abstract:
Networks are now regularly used for many classes of audio applications. Some applications have higher performance requirements than others. For file-based workflow in post-production, for instance, the emphasis is on high throughput to minimize the time required to move files across the network between workstations and storage systems. For real-time applications—such as Dante, Q-LAN, and AES67—the emphasis is on latency and network stability. Older systems such as CobraNet, Ethersound, and AES50 have other specific requirements. When video and other network applications share the network with audio applications, network design considerations are potentially significantly more complex. This workshop will outline requirements for specific audio networking applications and technologies. The audience will learn about the design issues that must be considered to support these applications and technologies.
Friday, October 30, 5:30 pm — 6:45 pm (Room 1A07)
Engineering Brief: EB4 - Listening, Hearing, & Production
Chair:
Bruno Fazenda, University of Salford - Salford, Greater Manchester, UK
EB4-1 Why Do My Ears Hurt after a Show (And What Can I Do to Prevent It)—Dennis Rauschmayer, REVx Technologies/REV33 - Austin, TX, USA
In this brief we review the traditional methods of preventing ear fatigue, short-term ear damage, and long-term ear damage. A new method to prevent ear fatigue, focused on performing musicians, is then presented. This method, which reduces noise and distortion in the artist's mix, is discussed. Qualitative and quantitative results from a series of trials and experiments are presented. Qualitative results from artist feedback indicate less ear fatigue, less ringing in the ears, and a better ability to have normal conversations after a performance when noise and distortion in their mix are reduced. Quantitative results are consistent with the qualitative results and show a reduction in the change in otoacoustic emissions measured for a set of musicians when noise and distortion are reduced. The results of the study suggest that there is an important new tool for musicians to use to combat ear fatigue and short-term hearing loss.
Engineering Brief 213 (Download now)
EB4-2 Classical Recording with Custom Equipment in South Brazil—Marcelo Johann, UFRGS - Porto Alegre, RS, Brazil; Andrei Yefinczuk, UFRGS - Porto Alegre, Brazil; Marcio Chiaramonte, Meber Metais - Bento Gonçalves, Brazil; Hique Gomez, Instituto Marcello Sfoggia - Porto Alegre, Brasil
This paper describes the process developed by Marcello Sfoggia for recording acoustic and classical music in the south of Brazil, making intensive use of custom equipment. Sfoggia spent most of his lifetime building dedicated circuits to optimize sound reproduction and recording. He took on the task of registering major performances in the city of Porto Alegre using his home-developed equipment, in what became a reference process. We describe the system employed for both sound capture and mixdown. Key components of the signal flow include preamplifiers with precision op-amps, short signal paths, modified A/D/A converters, and a mixing desk with pure vacuum-tube circuitry. Finally, we address our current efforts to continue his activities and improve upon his system with updated circuits and techniques.
Engineering Brief 214 (Download now)
EB4-3 Techniques For Mixing Sample-Based Music—Paul "Willie Green" Womack, Willie Green Music - Brooklyn, NY, USA
Samples are a great way to add impact, vibe, and texture to a song and can often be the primary component of a new work. From a production standpoint, audio that is already mixed and mastered can add to a producer’s sonic palette. From a mixing perspective, however, these same bonuses also provide a number of challenges. Looking more closely at each of the common issues an engineer often faces with sample-based music, I will illustrate techniques that can enable an engineer to better manipulate a sample, allowing it to sit more naturally inside the mix as a whole.
Engineering Brief 215 (Download now)
EB4-4 Case Studies of Inflatable Low- and Mid-Frequency Sound Absorption Technology—Niels Adelman-Larsen, Flex Acoustics - Copenhagen, Denmark
Surveys among professional musicians and sound engineers reveal that a long reverberation time at low frequencies in halls during concerts of reinforced music is a common cause of an unacceptable-sounding event. Mid- and high-frequency sound is seldom a reason for lack of clarity and definition, owing to a six times higher absorption by the audience compared to low frequencies and a higher directivity of loudspeakers at these frequencies. Within the genre of popular music, however, lower-frequency sounds are rhythmically very active and loud, and a long reverberation leads to a situation where the various notes and sounds cannot be clearly distinguished. This reverberant bass rumble often partially masks even the direct higher-pitched sounds. A new technology of inflated, thin plastic membranes seems to solve this challenge of needed low-frequency control. It is equally suitable for multipurpose halls that need to adjust their acoustics at the push of a button and for halls and arenas that only occasionally present amplified music and need to be treated just for the event. This paper presents the authors' research as well as the technology, showing applications in dissimilarly sized venues, including before-and-after measurements of reverberation time versus frequency.
Engineering Brief 216 (Download now)
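As a back-of-the-envelope illustration of why added low-frequency absorption matters, the sketch below applies the classic Sabine estimate RT60 = 0.161·V/A per octave band. The hall volume and absorption areas are invented placeholders, not measurements from the brief.

```python
# Back-of-the-envelope Sabine estimate showing how added low-frequency absorption
# shortens reverberation time; the hall volume and absorption areas are invented
# placeholders, not measurements from the brief.
def sabine_rt60(volume_m3, absorption_m2_sabins):
    """Classic Sabine estimate: RT60 = 0.161 * V / A (metric units)."""
    return 0.161 * volume_m3 / absorption_m2_sabins

if __name__ == "__main__":
    volume = 12000.0                                                        # m^3, mid-size hall
    base_absorption = {125: 350.0, 250: 500.0, 1000: 900.0, 4000: 1000.0}   # m^2 sabins
    added_low = {125: 300.0, 250: 250.0, 1000: 0.0, 4000: 0.0}              # hypothetical LF absorbers
    for band, a0 in base_absorption.items():
        before = sabine_rt60(volume, a0)
        after = sabine_rt60(volume, a0 + added_low[band])
        print(f"{band:5d} Hz: RT60 {before:4.1f} s -> {after:4.1f} s")
```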
EB4-5 Advanced Technical Ear Training: Development of an Innovative Set of Exercises for Audio Engineers—Denis Martin, McGill University - Montreal, QC, Canada; CIRMMT - Montreal, QC, Canada; George Massenburg, Schulich School of Music, McGill University - Montreal, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, Quebec, Canada
There are currently many automated software solutions to tackle the issue of timbral/EQ training for audio engineers but only limited offerings for developing other skills needed in the production process. We have developed and implemented a set of matching exercises in Pro Tools that fill this need. Presented with a reference track, users are trained in matching inter-instrument levels/gain, lead instrument volume automation, instrument spatial positioning/panning, reverberation level, and compression settings on a lead element within a full mix. The goal of these exercises is to refine the listener’s degree of perception along these production parameters and to train the listener to associate these perceived variations to objective parameters they can control. We also discuss possible future directions for exercises.
Engineering Brief 217 (Download now)
Saturday, October 31, 9:00 am — 10:45 am (Room 1A12)
Live Sound Seminar: LS5 - Wireless Matters, Part I: Theory and Practical Applications
Chair:James Stoffo, Radio Active Designs - Key West, FL, USA
Abstract:
The average production today utilizes more and more channels of wireless microphones, in-ear monitors, intercom, and interruptible foldback (IFB). This panel will discuss the strategy, tactics, and practices of implementing multiple wireless systems in RF-challenging environments, from the first phone call for the production through the event itself.
The session will be immediately followed by a session on the upcoming spectrum changes.
Saturday, October 31, 10:30 am — 12:00 pm (Stage PSE)
Project Studio Expo: The Project Studio in the Commercial World
Moderator:John Storyk, Architect, Studio Designer and Principal, Walters-Storyk Design Group - Highland, NY, USA
Panelists:
Christian Cooley, Bamyasi Studios - Miami, FL, USA
Sergio Molho, WSDG - Walters Storyk Design Group - Highland, NY, USA; Miami, FL, USA
Alex Santilli, Spice House Sound - Philadelphia, PA, USA
David Shinn, National Audio Theatre Festivals - New York, NY
Carl Tatz, Carl Tatz Design - Nashville, TN, USA
Abstract:
Most people think of project studios as a ‘personal workplace’, with little or no regard for how these environments and systems will deal with real-world commercial pressures. Over the years the Project Studio has come to mean many things to the studio world, often defined by terms such as budget, size, commercial status, personality, residential setting, etc. In fact, the lines of recording studio status that divide the commercial and non-commercial studio worlds, that differentiate between large and small, and that define residential and non-residential environments have been blurred for quite some time. This panel will explore this new frontier by presenting four “Project Studios” – each of which varies dramatically in size, budget, acoustic solution, and purpose. The panelists (either owners or designers) will describe each studio's individual goals, strengths, design/installation tips, and significant issues encountered during the design/construction process. Most importantly, they will also reveal the tale of the studio's success or failure after opening.
AES Members can watch a video of this session for free.
Saturday, October 31, 10:45 am — 12:00 pm (Room 1A12)
Live Sound Seminar: LS6 - Wireless Matters, Part II: The Coming Spectrum Changes and the New Paradigm
Chair:James Stoffo, Radio Active Designs - Key West, FL, USA
Abstract:
The 600 MHz spectrum auction is tentatively scheduled for first quarter 2016. This session will discuss how much spectrum is likely to be taken away and where, how the new FCC rules are shaping up, and what the future holds for new spectrum bands for wireless microphone operations and new operational practices.
Saturday, October 31, 11:00 am — 11:45 am (Stage LSE)
Live Sound Expo: Speech Intelligibility: Contributing Factors
Presenter:Dan Palmer, L-Acoustics Inc. - Oxnard, CA USA
Abstract:
The cliché installed sound system is hampered by poor reproduction fidelity and reflected sound—hardly desirable when the message is delivered by spoken word, be it a sermon, a reading, an informational announcement, or an evacuation warning. Through case studies of problems solved, this session will demonstrate how systems can live in harmony with their environment.
AES Members can watch a video of this session for free.
Saturday, October 31, 11:45 am — 12:45 pm (Room 1A14)
Game Audio: G7 - The Audio Implementation Arms Race: Implementation as a Weapon
Presenter:Sally-anne Kellaway, Firelight Technologies - Melbourne, VIC, Australia
Abstract:
Overview of some of the engineering problems and challenges in developing a 3D audio solution suitable for widespread use in virtual reality and ordinary gaming and some of the impacts on mixing workflows.
In the realm of game audio, the race to make interactive and adaptive audio implementation accessible to the entire game development team is on. Developers of game engines and audio middleware solutions are making their products more accessible to wider markets by including more tools and UI improvements for absolute beginners and audio professionals alike. Sally Kellaway will discuss the current challenges this movement poses to game audio professionals, and, using FMOD Studio as a lens, illustrate the value in extending a sound design skill base to take command of this element of the audio pipeline. Sound implementation will be explored for the potential it holds, focusing on the value of new tools and workflows that are on offer to sound designers and project teams. This discussion will enable sound designers to argue the value of upgrading and taking ownership of the audio pipeline to include advanced implementation tools.
Saturday, October 31, 12:00 pm — 12:45 pm (Stage LSE)
Live Sound Expo: Modern Digital Mixing Console Fundamentals: A Practical and Ergonomic Approach
Presenters:Stephen Bailey, Waves - Atlanta, GA, USA
Richard Ferriday, Cadac consoles - UK
Matt Larson, DiGiCo - Group One Limited, National Sales Manager - Farmingdale, NY
Marc Lopez
Robert Scovill, Avid Technologies - Scottsdale, AZ, USA; Eldon's Boy Productions Inc.
Abstract:
Instant recall of settings and configurations, consistent and predictable performance, increased performance and options in smaller footprints, affordability: for all those benefits and more, digital consoles are dominating in installed sound. While the ergonomics are designed to somewhat emulate analog console signal flow, it doesn’t take long for the two paradigms to diverge. Today’s digital consoles offer refinements in operation and networking, along with a broad array of processing through sophisticated plug-in environments. This session offers a ground-up, logical approach to digital mixing.
AES Members can watch a video of this session for free.
Saturday, October 31, 1:00 pm — 1:45 pm (Stage LSE)
Live Sound Expo: Mono vs Stereo vs LCR in HOW and Fixed-Install
Presenter:Jeff Taylor, VUE AUDIOTECHNIK - Escondido, CA, USA
Abstract:
Architectural issues, acoustic concerns, audience point of view, style of music—all these elements come into play in deciding whether to configure a fixed installation system in mono, to attempt stereo, or to combine the two in an LCR configuration. We examine the practical considerations in making a decision and in mixing for the chosen configuration.
AES Members can watch a video of this session for free.
Saturday, October 31, 1:30 pm — 2:30 pm (Room 1A13)
Networked Audio: N7 - Benefits of AES67 to the End User
Presenter:Rich Zwiebel, QSC - Boulder, CO, USA; K2
Abstract:
Presenter Rich Zwiebel has a long history in audio networking. He was a founder of Peak Audio, the company that developed CobraNet, the first widely used audio network for professional applications. As a VP at QSC he continues to be very active in the field and is currently the Chairman of the Media Networking Alliance.
This presentation reviews the history of professional audio networking, where we are today, and what the future may hold. A clear explanation of what AES67 is, as well as what it is not, along with how it will benefit those who choose to use it, will be included. Attendees will understand its relationship to existing audio network technologies in the market.
Additionally, an explanation of who the Media Networking Alliance is, who its members are, and what its goals are will be presented.
A discussion of the advantages of a single facility network will close out the session.
Saturday, October 31, 2:00 pm — 2:45 pm (Stage LSE)
Live Sound Expo: IEM Fundamentals and Hearing Conservation
Presenter:Mark Frink, Program Coordinator/Stage Manager - Jacksonville, Florida, USA; Independent Engineer and Tech Writer - IATSE 115
Abstract:
Drawing on his decades of road experience, our LSE host, Mark Frink, explains the logic behind moving live performers to personal in-ear monitoring solutions. Topics will include the selection of IEMs (universal vs. custom), mixing monitors for IEMs, personal mixing by performers, and protecting performers' hearing.
AES Members can watch a video of this session for free.
Saturday, October 31, 2:15 pm — 3:45 pm (Room 1A07)
Paper Session: P16 - Room Acoustics
Chair:
Rémi Audfray, Dolby Laboratories, Inc. - San Francisco, CA, USA
P16-1 Environments for Evaluation: The Development of Two New Rooms for Subjective Evaluation—Elisabeth McMullin, Samsung Research America - Valencia, CA USA; Adrian Celestinos, Samsung Research America - Valencia, CA, USA; Allan Devantier, Samsung Research America - Valencia, CA, USA
This paper gives an overview of the optimization, features, and design of two new critical listening rooms developed for subjective evaluation of a wide array of audio products. Features include a rotating wall for comparing flat-panel televisions, an all-digital audio switching system, custom tablet-based testing software for running a variety of listening experiments, and modular acoustic paneling for customizing room acoustics. Using simulations and acoustic measurements, a study of each of the rooms was performed to analyze the acoustics and optimize the listening environment for different listening situations.
Convention Paper 9460 (Purchase now)
P16-2 Low Frequency Behavior of Small Rooms—Renato Cipriano, Walters Storyk Design Group - Belo Horizonte, Brazil; Robi Hersberger, Walters Storyk Design Group - New York, USA; Gabriel Hauser, Walters Storyk Design Group - Basel, Switzerland; Dirk Noy, WSDG - Basel, Switzerland; John Storyk, Architect, Studio Designer and Principal, Walters-Storyk Design Group - Highland, NY, USA
Modeling of sound reinforcement systems and room acoustics in large- and medium-size venues has become a standard in the audio industry. However, acoustic modeling of small rooms has not yet evolved into a widely accepted practice, mainly because a suitable tool set has not been available. This work introduces a practical and accurate software-based approach for simulating the acoustic properties of studio rooms based on BEM. A detailed case study is presented, and modeling results are compared with measurements. It is shown that results match within the given uncertainties. It is also indicated how the simulation software can be enhanced to optimize loudspeaker locations and room geometry and to place absorbers in order to improve the acoustic quality of the space and thus the listening experience.
Convention Paper 9461 (Purchase now)
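As a quick sanity check that often precedes full BEM simulation of small-room low-frequency behavior, the following sketch lists the rigid-wall rectangular-room mode frequencies f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). This is not the paper's BEM tool, and the room dimensions are arbitrary examples.

```python
# Quick rigid-wall rectangular-room mode calculation, often used as a sanity check
# before full BEM simulation of small-room low-frequency behavior (this is not the
# paper's BEM tool; the room dimensions are arbitrary examples).
import itertools
import numpy as np

def room_modes(Lx, Ly, Lz, f_max=200.0, c=343.0, n_max=8):
    """Return sorted (frequency, (nx, ny, nz)) pairs below f_max for a rigid box."""
    modes = []
    for nx, ny, nz in itertools.product(range(n_max + 1), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (c / 2.0) * np.sqrt((nx / Lx)**2 + (ny / Ly)**2 + (nz / Lz)**2)
        if f <= f_max:
            modes.append((f, (nx, ny, nz)))
    return sorted(modes)

if __name__ == "__main__":
    for f, idx in room_modes(5.8, 4.2, 2.9, f_max=120.0)[:12]:
        print(f"{f:6.1f} Hz  mode {idx}")
```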
P16-3 Measuring Sound Field Diffusion: SFDC—Alejandro Bidondo, Universidad Nacional de Tres de Febrero - UNTREF - Caseros, Buenos Aires, Argentina; Mariano Arouxet, Universidad Nacional de Tres de Febrero - Buenos Aires, Argentina; Sergio Vazquez, Universidad Nacional de Tres de Febrero - Buenos Aires, Argentina; Javier Vazquez, Universidad Nacional de Tres de Febrero - Buenos Aires, Argentina; Germán Heinze, Universidad Nacional de Tres de Febrero - Buenos Aires, Argentina
This research addresses the usefulness of an absolute descriptor to quantify the degree of diffusion of a sound field on a third-octave-band basis. The degree of sound field diffuseness at one point is related to the control of reflection energy multiplied by the temporal distribution uniformity of the reflections. All this information is extracted from a monaural, broadband, omnidirectional, high-S/N impulse response. The coefficient ranges between 0 and 1 and evaluates the early, late, and total sound field for frequencies above the Schroeder frequency and in the far field from diffusive surfaces, zero being “no diffuseness” at all. This coefficient allows the comparison of different rooms and different positions inside rooms, measurement of the effects of different sound diffuser coatings, and the resulting spatial uniformity variation, among other applications.
Convention Paper 9462 (Purchase now)
Saturday, October 31, 3:00 pm — 3:45 pm (Stage LSE)
Live Sound Expo: The Future of Wireless: Now What?
Presenters:Mark Brunner, Shure Incorporated - Niles, IL USA
Joe Ciaudelli, Sennheiser - Old Lyme, CT USA
Howard Kaufman, Lectrosonics, Inc. - Seaford, NY, USA
Abstract:
There has been dramatic erosion in the television band spaces available for wireless microphone and monitor use. How can a facility find available bandwidth and stay legal? What can be done to future-proof a system? Do 2.4 GHz and similar systems offer a solution, and if so, for whom? What can digital wireless bring to the equation? All these questions and more will be addressed.
AES Members can watch a video of this session for free.
Saturday, October 31, 3:15 pm — 4:45 pm (Room 1A14)
Networked Audio: N8 - How to Get AES67 into Your Systems/Products
Chair:Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany
Panelists:
Michael Dosch, Lawo AG - Rastatt, Germany
Nathan Phillips, Coveloz Technologies - Ottawa, ON, Canada
Greg Shay, The Telos Alliance - Cleveland, OH, USA
Nicolas Sturmel, Digigram S.A. - Montbonnot, France
Arie van den Broek, Archwave Technologies - Schwerzenbach, Switzerland
Kieran Walsh, Audinate Pty. Ltd. - Ultimo, NSW, Australia
Abstract:
This workshop introduces several options for implementing AES67 networking capabilities in existing or newly designed products. The session starts with a quick recap of the technical ingredients of AES67 and points out the principal options for implementing AES67 in new or existing products. After providing an overview of commercially available building blocks (modules, software libraries, and reference designs), the workshop continues with a discussion of the value of providing AES67 compatibility from the perspective of providers of existing AoIP networking solutions. The workshop is targeted toward product manufacturers seeking ways to implement AES67 in their products, but should also provide valuable insight to those with a general technical interest in AES67.
Saturday, October 31, 3:45 pm — 4:15 pm (Room 1A07)
Engineering Brief: EB5 - Acoustics
Chair:
Jung Wook (Jonathan) Hong, McGill University - Montreal, QC, Canada; GKL Audio Inc. - Montreal, QC, Canada
EB5-1 Visualization of Compact Microphone Array Room Impulse Responses—Luca Remaggi, University of Surrey - Guildford, Surrey, UK; Philip Jackson, University of Surrey - Guildford, Surrey, UK; Philip Coleman, University of Surrey - Guildford, Surrey, UK; Jon Francombe, University of Surrey - Guildford, Surrey, UK
For many audio applications, availability of recorded multichannel room impulse responses (MC-RIRs) is fundamental. They enable development and testing of acoustic systems for reflective rooms. We present multiple MC-RIR datasets recorded in diverse rooms, using up to 60 loudspeaker positions and various uniform compact microphone arrays. These datasets complement existing RIR libraries and have dense spatial sampling of a listening position. To reveal the encapsulated spatial information, several state of the art room visualization methods are presented. Results confirm the measurement fidelity and graphically depict the geometry of the recorded rooms. Further investigation of these recordings and visualization methods will facilitate object-based RIR encoding, integration of audio with other forms of spatial information, and meaningful extrapolation and manipulation of recorded compact microphone array RIRs.
Engineering Brief 218 (Download now)
EB5-2 Sensible 21st Century Saxophone Selection—Thomas Mitchell, University of Miami - Coral Gables, FL, USA
This paper presents a method for selecting a saxophone, using data mining techniques with both subjective and objective data as criteria. Immediate, subjective personal impressions are given equal weight with more objective observations made after the fact, and with hard data distilled from audio recordings using the MIR Toolbox. Offshoots and directions for future research are considered.
Engineering Brief 219 (Download now)
Saturday, October 31, 3:45 pm — 6:45 pm (Off-Site 2)
Technical Tour: TT8 - Avery Fisher Hall [now David Geffen Hall]
Abstract:
Avery Fisher Hall is a concert hall in New York City's Lincoln Center for the Performing Arts complex on Manhattan's Upper West Side. The 2,738-seat auditorium opened in 1962 and is the home of the New York Philharmonic. Its recording facilities are run by Larry Rock, and the hall is used for many events, both musical and non-musical. As part of its Great Performers series, Lincoln Center presents visiting orchestras in Avery Fisher Hall. The PBS series "Live from Lincoln Center" also features performances from the Hall.
Location: Avery Fisher Hall
10 Lincoln Center Plaza, New York, NY
Bus transportation is not provided. We'd suggest taking the 7 Train to the 1 Train to 66th St.
Limited to 30 people. A ticket is required - anyone showing up without a ticket will be turned away.
Saturday, October 31, 4:00 pm — 4:45 pm (Stage LSE)
Live Sound Expo: Miking Grand Piano and Choirs
Presenters:Daryl Bornstein, Daryl Bornstein Audio - North Salem, NY, USA
Mark Frink, Program Coordinator/Stage Manager - Jacksonville, Florida, USA; Independent Engineer and Tech Writer - IATSE 115
Jeremiah Slovarp, Jereco Studios, Inc. - Bozeman, MT, USA; Montana State University
Abstract:
In houses of worship, regardless of worship style, acoustic grand piano and choirs are the most consistent sound sources to have fixed mics employed for sound reinforcement. This session covers mic selection, placement, and tips for keeping a setup consistent.
AES Members can watch a video of this session for free.
Saturday, October 31, 4:15 pm — 6:00 pm (Room 1A12)
Live Sound Seminar: LS7 - Sound Design for Theater: Practical and Artistic Considerations
Chair:Nevin Steinberg, Nevin Steinberg Sound Design - New York, NY, USA
Abstract:
Whether for a musical or a straight play, various time-tested elements, as well as emerging technologies, are crucial to a successful theatrical sound design, and those elements are as much artistic and visceral as they are technical. One of Broadway's leading sound designers will discuss many of the considerations and practices of the design process from beginning to completion.
Saturday, October 31, 4:30 pm — 5:45 pm (Stage PSE)
Project Studio Expo: Audio as a Business: Building and Developing a Career
Panelists:John Kiehl, Manhattan Producers Alliance - New York, NY, USA; Soundtrack Studios
Jerome Rossen, Freshmade Music - San Francisco, CA, USA; Manhattan Producers Alliance
Mike Sayre, Independent Film Composer - New York, NY, USA; Manhattan Producers Alliance
Carl Tatz, Carl Tatz Design - Nashville, TN, USA
Brian Walker, Audio Director, Leap Frog - San Francisco, CA; Manhattan Producers Alliance
Richard Warp, Manhattan Producers Alliance - San Francisco, CA; Leapfrog Enterprises Inc - Emeryville, CA, USA
Abstract:
Irons in the Fire: Career and Business Development Mentoring with the Manhattan Producers Alliance
Bring your energy, enthusiasm, business ideas, and questions. At this event the focus is on YOU! Succeeding in music today is, more than ever, challenging. Members of the Manhattan Producers Alliance will give a brief talk about developing your brand and your business and functioning as a creative talent in an ever-changing music business. Take this unique opportunity to meet some ManhatPro members and spend some time learning some tips and tricks for business development. You’ll participate in our open discussions, discuss your personal career goals one on one, and get a chance to meet some ManhatPro members.
AES Members can watch a video of this session for free.
Saturday, October 31, 4:30 pm — 6:00 pm (Room 1A13)
Product Development: PD10 - Optimizing the Powered Loudspeaker System
Presenter:Scott Leslie, Ashly Audio - Webster, NY, USA; PD Squared - Irvine, CA USA
Abstract:
Historically, amplifiers and loudspeakers have been interfaced using a simplified convention of 4/8 ohm nominal speaker impedance. With the general market trend toward self-powered speakers, greater optimization of the interface between speaker and amplifier becomes possible. This tutorial, part of the product design track (theme 2), provides a forum for discussing the amplifier performance required for self-powered speakers, as well as optimization techniques between the amplifier section and the speaker drivers. It aims to give speaker designers a better understanding of the complex interfaces between signal processing, power amplification, and the acoustic domain in a self-powered speaker, so that they can optimize self-powered designs and achieve higher SPL at lower cost.
Saturday, October 31, 5:00 pm — 6:30 pm (Room 1A14)
Networked Audio: N9 - How Will AES67 Affect the Industry?
Chair:Rich Zwiebel, QSC - Boulder, CO, USA; K2
Panelists:
Claude Cellier, Merging Technologies - Puidoux, Switzerland
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany
Patrick Killianey, Yamaha Professional Audio - Buena Park, CA, USA
Phil Wagner, Focusrite Novation Inc. - El Segundo, CA
Ethan Wetzell, Bosch Communications Systems - Burnsville, MN USA; OCA Alliance
Abstract:
There are many audio networking standards available today. Unfortunately, equipment designers and facility engineers have been forced to choose between them, either adopting a single platform for an entire operation or linking disparate network pools by traditional cabling (analog, AES/EBU, or MADI). AES67 solves this dilemma, providing a common interchange format that allows various network platforms to exchange audio without sacrificing proprietary advantages. The standard was published in 2013, and manufacturers are already showing products with AES67 connectivity this year. Join our panel of six industry experts for an open discussion on how AES67 will impact our industry.
Sunday, November 1, 9:00 am — 10:45 am (Room 1A12)
Live Sound Seminar: LS8 - Sound Design Meets Reality
Chair:Andrew Keister, AKD - New York, NY, USA
Abstract:
The best intentions of the sound designer don’t always fit in with the venue’s design or infrastructure, other departments’ needs, or other changes as a production is loaded in and set up for the first time. How the designer’s designated representative on site addresses these issues is critical to keeping the overall vision of the sound design and production aesthetics intact while keeping an eye on the budget and schedule.
Sunday, November 1, 9:00 am — 10:30 am (Room 1A13)
Product Development: PD11 - Loudspeaker Measurements
Presenter:Charles Hughes, Excelsior Audio - Gastonia, NC, USA; AFMG - Berlin, Germany
Abstract:
This tutorial session will cover best practices for loudspeaker measurements. It is critical for product development and component selection to know the response of loudspeaker systems and components with reasonable accuracy in order to make informed decisions based on comparisons of data. In this session we will briefly cover the basics of FFT-based measurement systems before moving on to additional topics: (1) Averaging and S/N; (2) Windowing (both signal acquisition and impulse response windowing); (3) Ground plane measurement techniques; (4) Directivity measurements; (5) Maximum input voltage measurements; (6) Impedance; (7) Alignment of pass bands.
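As a small illustration of the impulse-response windowing topic listed above, this sketch truncates a synthetic impulse response just before a (fake) first reflection and computes the quasi-anechoic frequency response, noting the frequency-resolution limit imposed by the window length. It is an illustrative example, not material from the tutorial.

```python
# Minimal sketch of impulse-response windowing for quasi-anechoic loudspeaker
# measurement: truncate the IR just before the first room reflection, then FFT.
# The reflection time and IR here are synthetic placeholders for illustration.
import numpy as np

def windowed_response(ir, fs, t_reflection, fade_ms=0.5):
    """Frequency response from an IR windowed to exclude the first reflection."""
    n_keep = int(t_reflection * fs)
    window = np.ones(n_keep)
    n_fade = max(1, int(fade_ms * 1e-3 * fs))       # half-cosine fade-out at the end
    window[-n_fade:] = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, n_fade)))
    spectrum = np.fft.rfft(ir[:n_keep] * window, n=1 << 16)
    freqs = np.fft.rfftfreq(1 << 16, 1.0 / fs)
    return freqs, 20.0 * np.log10(np.abs(spectrum) + 1e-12)

if __name__ == "__main__":
    fs = 48000
    t = np.arange(int(0.05 * fs)) / fs
    ir = np.exp(-t / 0.002) * np.sin(2 * np.pi * 1500 * t)   # fake direct sound
    n_refl = int(0.008 * fs)                                  # fake reflection at 8 ms
    ir[n_refl:] += 0.3 * ir[: len(ir) - n_refl]
    freqs, mag_db = windowed_response(ir, fs, t_reflection=0.007)
    idx = np.argmin(np.abs(freqs - 1500.0))
    print(f"magnitude at 1.5 kHz: {mag_db[idx]:.1f} dB")
    print("frequency resolution limited to ~", round(1.0 / 0.007), "Hz by the 7 ms window")
```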
Sunday, November 1, 10:00 am — 11:00 am (Room 1A07)
Paper Session: P21 - Applications in Audio
Chair:
Jason Corey, University of Michigan - Ann Arbor, MI, USA
P21-1 Loudness: A Function of Peak, RMS, and Mean Values of a Sound Signal—Hoda Nasereddin, IRIB University - Tehran, Iran; Ayoub Banoushi, IRIB University - Tehran, Iran
Every sound has a loudness recognized by the hearing mechanism. Although loudness is a sensation measure, it is a function of sound signal properties. However, the function is not completely clear. In this paper we show that loudness determination as a function of the root-mean-square (RMS), peak, and average values of a sound signal is possible with an artificial neural network (ANN). We did not have access to experimental data, so we produced the required training data using the ITU-R BS.1770 model. The results show that loudness can be simply estimated using physical features of the sound signal, without referring to a complex hearing mechanism.
Convention Paper 9473 (Purchase now)
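The sketch below extracts, per block, the three signal features the paper names (peak, RMS, and mean absolute value). The block length is an assumed choice, and the neural-network stage is only indicated, since the training targets would come from an ITU-R BS.1770 loudness model as the abstract describes.

```python
# Sketch of extracting the three signal features named in the paper (peak, RMS,
# and mean absolute value) per block; the neural-network mapping to loudness is
# only indicated, since the training targets would come from ITU-R BS.1770.
import numpy as np

def block_features(x, fs, block_s=0.4, hop_s=0.1):
    """Return an (n_blocks, 3) array of [peak, rms, mean_abs] per block."""
    block, hop = int(block_s * fs), int(hop_s * fs)
    feats = []
    for start in range(0, len(x) - block + 1, hop):
        b = x[start:start + block]
        feats.append([np.max(np.abs(b)),
                      np.sqrt(np.mean(b**2)),
                      np.mean(np.abs(b))])
    return np.asarray(feats)

if __name__ == "__main__":
    fs = 48000
    t = np.arange(fs * 3) / fs
    x = 0.5 * np.sin(2 * np.pi * 997 * t) * (0.2 + 0.8 * (t > 1.5))  # level step
    F = block_features(x, fs)
    print("feature matrix shape:", F.shape)
    # A small regressor (e.g., an MLP) could then be fit with BS.1770 loudness
    # values as targets, as the paper describes.
```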
P21-2 Robust Audio Fingerprinting for Multimedia Recognition Applications—Sangmoon Lee, Samsung Electronics Co. Ltd. - Suwon, Gyeonggi-do, Korea; Inwoo Hwang, Samsung Electronics Co. Ltd. - Suwon-si, Gyeonggi-do, Korea; Byeong-Seob Ko, Samsung Electronics Co. Ltd. - Suwon, Korea; Kibeom Kim, Samsung Electronics Co. Ltd. - Suwon, Gyeonggi-do, Korea; Anant Baijal, Samsung Electronics Co. Ltd. - Suwon, Korea; Youngtae Kim, Samsung Electronics Co. Ltd. - Suwon, Gyeonggi-do, Korea
For a reliable audio fingerprinting (AFP) system for multimedia services, it is essential to make fingerprints robust to the time mismatch between the live audio stream and prior recordings, while keeping them sensitive to changes in content for accurate discrimination. This paper presents a new AFP method using line spectral frequencies (LSFs), parameters that capture the underlying spectral shape. The proposed AFP method includes a new systematic scheme for robust and discriminative fingerprint generation based on the inter-frame LSF difference and an efficient matching algorithm using a frame concentration measure based on the frame continuity property. Tests on databases containing a variety of advertisements are carried out to compare the performance of the Philips Robust Hash (PRH) and the proposed AFP. The test results demonstrate that the proposed AFP can maintain its true matched rate at over 98% even when the overlap ratio is as low as 87.5%. It can be concluded that the proposed AFP algorithm is more robust to time mismatch conditions than the PRH method.
Convention Paper 9475 (Purchase now)
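For orientation, the following sketch computes line spectral frequencies from an LPC polynomial in the standard textbook way (sum and difference polynomials, root angles on the unit circle), using librosa for the LPC step. It is a generic LSF derivation on a synthetic frame, not the authors' fingerprinting system.

```python
# Generic computation of line spectral frequencies (LSFs) from an LPC polynomial,
# the spectral-shape parameters the fingerprint above is built on. This is a
# textbook LSF derivation for illustration, not the authors' fingerprint system.
import numpy as np
import librosa

def lsf_from_frame(frame, order=10):
    """Return LSFs (radians, ascending in (0, pi)) for one audio frame."""
    a = librosa.lpc(frame, order=order)                                  # [1, a1, ..., ap]
    p = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])    # A(z) + z^-(p+1) A(1/z)
    q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])    # A(z) - z^-(p+1) A(1/z)
    angles = np.angle(np.concatenate([np.roots(p), np.roots(q)]))
    return np.sort(angles[(angles > 1e-6) & (angles < np.pi - 1e-6)])

if __name__ == "__main__":
    sr = 16000
    t = np.arange(512) / sr
    # Synthetic two-tone frame with a Hann window and a little noise for conditioning.
    frame = np.hanning(512) * (np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 1320 * t))
    frame += 1e-4 * np.random.default_rng(0).standard_normal(512)
    print("LSFs (Hz):", np.round(lsf_from_frame(frame) * sr / (2 * np.pi), 1))
    # The paper's fingerprint then uses inter-frame LSF differences for matching.
```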
Sunday, November 1, 10:15 am — 1:30 pm (Off-Site 1)
Technical Tour: TT10 - NBC Universal
Abstract:
The tour will include a visit to Studio 6B, home of The Tonight Show Starring Jimmy Fallon, and move on to Studio 8G, home of Late Night with Seth Meyers. We will review the audio technology used to produce both shows.
Location: NBC Universal
30 Rockefeller Pl, New York, NY
Bus transportation is not provided. We'd suggest taking the 7 Train to Times Square and then walking up to Rockefeller Center.
Limited to 40 people. A ticket is required - anyone showing up without a ticket will be turned away.
Sunday, November 1, 11:00 am — 4:00 pm (Stage PSE)
Project Studio Expo: Mic to Monitor
Abstract:
So, you care about your sound, but don't know why you can't quite get the results you strive for!
Learn from our panel of experts from acoustics, high-end audio product design, music recording, and production. Supercharge your music or recording career!
Attend the Prism Sound Mic to Monitor seminars at AES New York's Project Studio Expo and discover tips, techniques, ideas and solutions you can start using right away! The Mic to Monitor seminars will cover practical aspects of room treatment, loudspeaker placement, loudspeaker technology, microphone technology and microphone selection and positioning, A/D & D/A converters, clocking strategies and some fascinating insights into psycho-acoustics.
The Mic to Monitor Seminar day always ends with a VIP guest speaker. You'll be treated to a talk about their career and professional approach with some exciting playback examples from recent projects. Recent presenters have worked with such luminaries as Paul McCartney, Mary J Blige, Van Morrison, AC/DC and Jay-Z to name but a few.
Make the journey from Mic to Monitor and open your ears to a whole new way of creating your own hit sound!
We hope to see you at AES!
11:00 am – 11:40 am: Mastering and Recording with High Performance Analog Electronics
11:40 am – 12:20 pm: Converter Technology and Clocking Issues
12:20 pm – 1:00 pm: Practical Room Acoustics and Treatment
1:00 pm – 1:40 pm: Loudspeaker Technology and Setup
1:40 pm – 2:20 pm: Software and DSP/Plug-In Technology
2:20 pm – 3:00 pm: Microphone Technology and Usage
3:00 pm – 3:40 pm: VIP Guest Speaker on Career Success and Their Secret Sauce!
3:40 pm – 4:00 pm: Q&A
Sunday, November 1, 11:00 am — 11:45 am (Stage LSE)
Live Sound Expo: Virtual Sound Checks and Processing in a Networked Environment
Presenters:Peter Keppler
Kevin Madigan, Independent - Venice, CA, USA
Robert Scovill, Avid Technologies - Scottsdale, AZ, USA; Eldon's Boy Productions Inc.
Taidus Vallandi, DiGiCo - Las Vegas, NV, USA; Group One Ltd. - Farmingdale, NY, USA
Abstract:
Digital consoles and digital networking offer a natural pathway to simple recording through a single connection, making virtual sound checks an equally simple tool. Further, network appliances now offer universally applicable virtual effects racks, with benefits in pre-production, in enhanced portability, in migrating a studio sound to the stage (including providing recording engineers familiar tools at FOH), and in producing enhanced monitor mixes. This session examines the fundamentals of effectively deploying such tools.
AES Members can watch a video of this session for free.
Sunday, November 1, 11:00 am — 12:30 pm (Room 1A12)
Networked Audio: N10 - AES67 Interoperability Testing
Chair:Kevin Gross, AVA Networks - Boulder, CO, USA
Panelists:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany
Peter Stevens, BBC Research & Development - London, UK
Nicolas Sturmel, Digigram S.A. - Montbonnot, France
Abstract:
The AES has now planned two plugfests for AES67 implementers and users. The first plugfest was held in October 2014 at IRT in Munich. A report describing this event was produced and published by the AESSC as AES-R12-2014. The second plugfest is planned for early November in Washington D.C. at NPR headquarters. This workshop will summarize the testing performed and will present results. A panel comprising plugfest participants will answer audience questions and the audience should get a good feel for where AES67 implementation stands.
Sunday, November 1, 12:00 pm — 12:45 pm (Stage LSE)
Live Sound Expo: Shed and Arena Loudspeaker Optimization: Pulling Big Shows Together
Presenter:Bernie Broderick, Eastern Acoustic Works (EAW) - Whitinsville, MA, USA
Abstract:
Beginning with off-line prep and carrying on through the loudspeaker hang and on to sound check, this end-user focused session uses a case study approach to walk through the steps of configuring and optimizing a rig for large audiences in amphitheaters and arenas.
AES Members can watch a video of this session for free.
Sunday, November 1, 1:00 pm — 1:45 pm (Stage LSE)
Live Sound Expo: Choosing the Right Vocal Mic
Presenters:Mark Frink, Program Coordinator/Stage Manager - Jacksonville, Florida, USA; Independent Engineer and Tech Writer - IATSE 115
Peter Keppler
Kevin Madigan, Independent - Venice, CA, USA
Abstract:
While there are tried-and-true mics clutched by singers across the world, selecting the best mic for a vocalist involves more than snatching the most familiar mic off the shelf. This session covers microphone fundamentals (including polar patterns and capsule construction), matching performance with a given voice and singing style, as well as tips for working with vocalists.
AES Members can watch a video of this session for free.
Sunday, November 1, 1:00 pm — 2:45 pm (Room 1A12)
Live Sound Seminar: LS9 - Live Sound Design for TV
Chair:Duncan Edwards
Panelists:
Mac Kerr
Matt Kraus
Simon Matthews
Abstract:
House sound reinforcement for live broadcast has its own set of unique requirements, one of the primary goals being that it must not interfere with the audio for broadcast. Duncan Edwards is the in-studio sound consultant for NBC and, along with his staff, will discuss the primary considerations, subtleties, and design for presenting live performances to a television studio audience.
Sunday, November 1, 2:00 pm — 2:45 pm (Stage LSE)
Live Sound Expo: RF Coordination on the Road
Presenter:Ike Zimbel, Zimbel Audio Productions - Toronto, Canada
Abstract:
Get a look into the working life of a touring RF engineer. Our guest engineer, just off a five-month road haul, compares the RF environments in North American arenas, shares a practical approach to working with wireless microphones, instruments and monitors in those environments and discusses wireless best practices.
AES Members can watch a video of this session for free.
Sunday, November 1, 2:00 pm — 4:00 pm (Room 1A08)
Paper Session: P22 - Sound Reinforcement
Chair:
Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK
P22-1 From Studio to Stage—Guillaume Le Hénaff, Conservatory of Paris - Paris, France
Converting studio-produced music into a live concert is a key issue for many artists. Studio work is often a long-term undertaking during which everything is subject to attentive decisions, e.g., instruments, performers, recording venues, microphones. When performing songs from a record in concert, all these decisions have to be reviewed or at least questioned. Indeed, studio and stage are two very different production contexts and differ on so many points that artists often change their arrangements, line-up, or even the form of their songs. However, live sound engineers may be expected to reproduce the sound quality and aesthetics of the record. In this paper we propose solutions for the switchover from studio to stage, providing artists and engineers with useful tools when designing the sound of a studio-album-inspired live show. Specifically, we explain why and how performing music in concert differs from performing in the studio, we detail types of microphones that are suited to both recording and sound reinforcement applications, and we take an inventory of miking tricks and mixing techniques, such as Virtual Soundcheck, that offer a studio workflow to front-of-house engineers.
Convention Paper 9476 (Purchase now)
P22-2 Some Effects of Speech Signal Characteristics on PA System Performance and Design—Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK
Although the characteristics of speech signals have been extensively studied for more than 90 years, going back to the pioneering research of Harvey Fletcher and Bell Labs, the characteristics of speech are not as well understood by the PA and sound reinforcement industries as they perhaps should be. Significant differences occur both in the literature and between international standards concerning such basic parameters as speech spectra and level. The paper reviews the primary characteristics of speech relevant to sound system design and shows how differences within the data, or misapplication of it, can lead to impairment of system performance and potential loss of intelligibility. The implications for compliance with various national and international life-safety standards are discussed.
Convention Paper 9477 (Purchase now)
P22-3 Directivity-Customizable Loudspeaker Arrays Using Constant-Beamwidth Transducer (CBT) Overlapped Shading—Xuelei Feng, Nanjing University - Nanjing, Jiangsu, China; Yong Shen, Nanjing University - Nanjing, Jiangsu Province, China; D.B. (Don) Keele, Jr., DBK Associates and Labs - Bloomington, IN, USA; Jie Xia, Nanjing University - Nanjing, China
In this work a multiple constant-beamwidth transducer (Multi-CBT) loudspeaker array is proposed that is constructed by applying multiple overlapping CBT Legendre shadings to a circular-arc or straight-line delay-curved multi-acoustic-source array. Because it has been proved theoretically and experimentally that the CBT array provides constant broadband directivity behavior with nearly no side lobes, the Multi-CBT array can provide a directivity-customizable sound field with frequency-independent element weights by sampling and reconstructing the targeted directivity pattern. Various circularly curved Multi-CBT arrays and straight-line, delay-curved Multi-CBT arrays are analyzed in several application examples that are based on providing constant Sound Pressure Level (SPL) on a seating plane, and their performance capabilities are verified. The power of the method lies in the fact that only a few easily adjustable, real-valued element weights completely control the shape of the polar pattern, which makes matching the polar shape to a specific seating plane very easy. The results indicate that the desired directivity patterns can indeed be achieved.
Convention Paper 9478 (Purchase now)
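As a rough sketch in the spirit of the Legendre shading the abstract cites, the example below tapers the element gains of a circular-arc array with a Legendre function evaluated over the arc. The half-angle, degree (nu), and element count are arbitrary illustrations, and the paper's overlapped Multi-CBT shading is not reproduced here.

```python
# Rough sketch of Legendre-function amplitude shading across a circular-arc array,
# in the spirit of the CBT shading cited above. The half-angle, degree (nu), and
# element count are arbitrary illustrations, not values from the paper.
import numpy as np
from scipy.special import lpmv

def cbt_shading(n_elements=15, half_angle_deg=30.0, nu=4.0):
    """Per-element gains: P_nu(cos theta), clipped at zero beyond its first zero."""
    theta = np.deg2rad(np.linspace(-half_angle_deg, half_angle_deg, n_elements))
    gains = lpmv(0, nu, np.cos(np.abs(theta)))   # Legendre function of degree nu
    return np.clip(gains, 0.0, None)

if __name__ == "__main__":
    g = cbt_shading()
    for angle, gain in zip(np.linspace(-30, 30, 15), g):
        print(f"{angle:+6.1f} deg  gain {gain:5.3f}")
```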
P22-4 A Novel Approach to Large-Scale Sound Reinforcement Systems—Mario Di Cola, Audio Labs Systems - Casoli (CH), Italy; Alessandro Tatini, K-Array S.r.l. - Florence, Italy
An innovative approach to vertical array technology in large-scale sound reinforcement is presented. The innovation consists of the mechanical arrangement of the array as well as DSP processing for computer-assisted coverage optimization. Beyond these, a different form factor for the vertical array elements, the unusual acoustic principle of the dipole, and an alternative mechanical aiming method are also involved. The paper presents a synthesis of this innovative concept, supported by detailed descriptions, test measurements, and proven results from real-world applications.
Convention Paper 9479 (Purchase now)
Sunday, November 1, 2:45 pm — 4:30 pm (Room 1A12)
Live Sound Seminar: LS10 - Loudspeaker Developments and Their Impact on the Industry
Chair:Dave Rat, Rat Sound Systems - Camarillo, CA, USA
Panelists:
David Gunness, Fulcrum Acoustic - Sutton, MA, USA
Ralph Heinz, Renkus-Heinz, Inc. - Foothill Ranch, CA, USA
Dave Natale, Audio Resource Group, Inc. - Lancaster, PA, USA
Abstract:
Three Daves and a Ralph lend their experience to the discussion: Dave Rat, owner of Rat Sound and FOH mixer for the Red Hot Chili Peppers; Dave Gunness, speaker designer formerly with EV and EAW and now a partner in Fulcrum Acoustic; Dave Natale, FOH mixer for The Rolling Stones; and Ralph Heinz, speaker designer at Renkus-Heinz.