
AES San Francisco 2008
Acoustics Event Details

Wednesday, October 1, 9:00 am — 4:00 pm

Live Sound Symposium: Surround Live VI—Acquiring the Surround Field

The Lodge Ballroom, Regency Center 1290 Sutter St. San Francisco, CA Tel. +1 415 673 5716

Abstract:
Building on the five previous, highly successful Surround Live symposia, Surround Live Six will once again explore in detail the world of live surround audio.

Frederick Ampel, President of the consultancy Technology Visions, in cooperation with the Audio Engineering Society, brings this year's event back to San Francisco for the third time. The event will feature a wide range of professionals from televised sports, public radio, and the digital processing and encoding sciences.

Surround Live Six Platinum Sponsors are Neural Audio and Sennheiser/K&H. The Surround Live Six Gold Sponsor is Ti-Max/Outboard Electronics.

8:15 am - 9:00 am – Coffee, Registration, Continental Breakfast
9:00 am – Keynote #1 – Kurt Graffy
9:40 am – Keynote #2 – J. Johnston
10:15 am - 10:25 am – Coffee Break
10:30 am - 12:30 pm – Presenters 1, 2, & 3 plus Live Demonstrations and Demo Video Clips with Surround Audio
12:30 pm - 1:00 pm – Lunch (provided for Ticketed Participants)
1:00 pm - 3:00 pm – Presenters 4, 5, & 6 with Live Demonstrations and Clips
3:00 pm - 3:15 pm – Break
3:15 pm - 4:30 pm – Panel Discussion and Interactive Q&A
4:30 pm - 5:00 pm – Organ Concert (Pending availability of Organist) featuring the 1909 Austin Pipe Organ

Scheduled to appear are:
• Fred Aldous - FOX Sports Audio Consultant / Sr. Mixer
• Tom Sahara - Sr. Director of Remote Operations & IT, Turner Sports
• Mike Pappas – KUVO Radio – Denver
• Kurt Graffy – ARUP Acoustics – San Francisco – Co-Keynote
• James D. (JJ) Johnston - Chief Scientist, Neural Audio, Kirkland, WA.
• Jim Hilson – Dolby Laboratories – San Francisco, CA.
• Other possible presenters include Speed Network, NFL Films, and NPR.

The day’s events will include formal presentations, special demonstration materials in full surround, and interactive discussions with presenters. Seating is limited, and previous events have sold out quickly. Register early to ensure you will be able to attend.

Further details will be added as they become available.


Thursday, October 2, 9:00 am — 10:45 am

B1 - Listening Tests on Existing and New HDTV Surround Coding Systems


Chair:
Gerhard Stoll, IRT
Panelists:
Florian Camerer, ORF
Kimio Hamasaki, NHK Science & Technical Research Laboratories
Steve Lyman, Dolby Laboratories
Andrew Mason, BBC R&D
Bosse Ternström, SR

Abstract:
With the advent of HDTV services, the public is increasingly being exposed to surround sound presentations using so-called home theater environments. However, the restricted bandwidth available into the home, whether by broadcast or via broadband, means that there is an increasing interest in the performance of low bit rate surround sound audio coding systems for “emission” coding. The European Broadcasting Union Project Group D/MAE (Multichannel Audio Evaluations) conducted extensive listening tests to assess the sound quality of multichannel audio codecs for broadcast applications in a range from 64 kbit/s to 1.5 Mbit/s. Several laboratories in Europe have contributed to this work.

This broadcast session will provide in-depth information about these tests and their results. It will also describe how the professional industry, i.e., codec proponents and decoder manufacturers, is taking further steps to develop new products for multichannel sound in HDTV.


Thursday, October 2, 9:00 am — 12:30 pm

P1 - Audio Coding


Chair: Marina Bosi, Stanford University - Stanford, CA, USA

P1-1 A Parametric Instrument Codec for Very Low Bit Rates – Mirko Arnold, Gerald Schuller, Fraunhofer Institute for Digital Media Technology - Ilmenau, Germany
A technique for the compression of guitar signals is presented that utilizes a simple model of the guitar. The goal for the codec is to obtain acceptable quality at significantly lower bit rates compared to universal audio codecs. This instrument codec achieves its data compression by transmitting an excitation function and model parameters to the receiver instead of the waveform. The parameters are extracted from the signal using weighted least squares approximation in the frequency domain. For evaluation, a listening test was conducted and the results are presented. They show that this compression technique provides a quality level comparable to recent universal audio codecs. The application, however, is at this stage limited to very simple guitar melody lines. [This paper is being presented by Gerald Schuller.]
Convention Paper 7501 (Purchase now)

P1-2 Stereo ACC Real-Time Audio Communication – Anibal Ferreira, University of Porto - Porto, Portugal, and ATC Labs - Chatham, NJ, USA; Filipe Abreu, SEEGNAL Research - Portugal; Deepen Sinha, ATC Labs - Chatham, NJ, USA
Audio Communication Coder (ACC) is a codec that has been optimized for monophonic encoding of mixed speech/audio material while minimizing codec delay and improving intrinsic error robustness. In this paper we describe two major recent algorithmic improvements to ACC: on-the-fly bit rate switching and coding of stereo. A combination of source, parametric, and perceptual coding techniques allows very graceful switching between different bit rates with minimal impact on subjective quality. A real-time GUI demonstration platform is available that illustrates ACC operation from 16 kbit/s mono to 256 kbit/s stereo. A real-time two-way stereo communication platform over Bluetooth has been implemented that illustrates ACC's operational flexibility and robustness in error-prone environments.
Convention Paper 7502 (Purchase now)

P1-3 MPEG-4 Enhanced Low Delay AAC—A New Standard for High Quality Communication – Markus Schnell, Markus Schmidt, Manuel Jander, Tobias Albert, Ralf Geiger, Fraunhofer IIS - Erlangen, Germany; Vesa Ruoppila, Per Ekstrand, Dolby - Stockholm, Sweden, and Nuremberg, Germany; Bernhard Grill, Fraunhofer IIS - Erlangen, Germany
The MPEG Audio standardization group has recently concluded the standardization process for the MPEG-4 ER Enhanced Low Delay AAC (AAC-ELD) codec. This codec is a new member of the MPEG Advanced Audio Coding family. It represents the efficient combination of the AAC Low Delay codec and the Spectral Band Replication (SBR) technique known from HE-AAC. This paper provides a complete overview of the underlying technology, presents points of operation as well as applications, and discusses MPEG verification test results.
Convention Paper 7503 (Purchase now)

P1-4 Efficient Detection of Exact Redundancies in Audio Signals – José R. Zapata G., Universidad Pontificia Bolivariana - Medellín, Antioquia, Colombia; Ricardo A. Garcia, Kurzweil Music Systems - Waltham, MA, USA
An efficient method to identify bitwise identical long-time redundant segments in audio signals is presented. It uses audio segmentation with simple time domain features to identify long term candidates for similar segments, and low level sample accurate metrics for the final matching. Applications in compression (lossy and lossless) of music signals (monophonic and multichannel) are discussed.
Convention Paper 7504 (Purchase now)
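
As a rough illustration of the bitwise-identical matching the abstract describes, the sketch below hashes fixed-size sample blocks and reports blocks that exactly repeat earlier ones. The block size, helper name, and single-stage matching are illustrative assumptions, not the paper's two-stage (time-domain candidates plus sample-accurate check) method.

```python
import numpy as np

def find_exact_repeats(samples, block=4096):
    """Map each fixed-size block of samples to the first earlier block with
    identical content. Returns {block_index: earlier_block_index}."""
    seen = {}
    repeats = {}
    for i in range(0, len(samples) - block + 1, block):
        key = samples[i:i + block].tobytes()     # bitwise-exact key
        if key in seen:
            repeats[i // block] = seen[key]
        else:
            seen[key] = i // block
    return repeats

x = np.arange(16384, dtype=np.int16)
y = np.concatenate([x, x[:8192]])                # second half repeats the start
print(find_exact_repeats(y))                     # {4: 0, 5: 1}
```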

P1-5 An Improved Distortion Measure for Audio Coding and a Corresponding Two-Layered Trellis Approach for its Optimization – Vinay Melkote, Kenneth Rose, University of California - Santa Barbara, CA, USA
The efficacy of rate-distortion optimization in audio coding is constrained by the quality of the distortion measure. The proposed approach is motivated by the observation that the Noise-to-Mask Ratio (NMR) measure, as it is widely used, is only well adapted to evaluate relative distortion of audio bands of equal width on the Bark scale. We propose a modification of the distortion measure to explicitly account for Bark bandwidth differences across audio coding bands. Substantial subjective gains are observed when this new measure is utilized instead of NMR in the Two Loop Search, for quantization and coding parameters of scalefactor bands in an AAC encoder. Comprehensive optimization of the new measure, over the entire audio file, is then performed using a two-layered trellis approach, and yields nearly artifact-free audio even at low bit-rates.
Convention Paper 7505 (Purchase now)
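
For readers unfamiliar with NMR, the sketch below shows a plain per-band noise-to-mask ratio and one possible way to weight it by Bark bandwidth. The Traunmüller Bark approximation, the band edges, and the weighting scheme are illustrative assumptions, not the measure proposed in the paper.

```python
import numpy as np

def bark(f_hz):
    """Traunmueller's approximation of the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def weighted_nmr(noise_energy, mask_threshold, band_edges_hz):
    """Illustrative band-wise NMR with a hypothetical Bark-bandwidth weight."""
    noise_energy = np.asarray(noise_energy, dtype=float)
    mask_threshold = np.asarray(mask_threshold, dtype=float)
    edges = np.asarray(band_edges_hz, dtype=float)
    nmr = noise_energy / mask_threshold                  # classic per-band NMR
    bark_width = bark(edges[1:]) - bark(edges[:-1])      # band width in Bark
    weights = bark_width / bark_width.sum()              # hypothetical weighting
    return float(np.sum(weights * nmr))                  # weighted average NMR

# Example: three scale-factor-like bands with made-up energies
print(weighted_nmr([1e-6, 4e-6, 2e-6], [2e-6, 2e-6, 8e-6], [0, 500, 2000, 8000]))
```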

P1-6 Spatial Audio Scene Coding – Michael M. Goodwin, Jean-Marc Jot, Creative Advanced Technology Center - Scotts Valley, CA, USA
This paper provides an overview of a framework for generalized multichannel audio processing. In this Spatial Audio Scene Coding (SASC) framework, the central idea is to represent an input audio scene in a way that is independent of any assumed or intended reproduction format. This format-agnostic parameterization enables optimal reproduction over any given playback system as well as flexible scene modification. The signal analysis and synthesis tools needed for SASC are described, including a presentation of new approaches for multichannel primary-ambient decomposition. Applications of SASC to spatial audio coding, upmix, phase-amplitude matrix decoding, multichannel format conversion, and binaural reproduction are discussed.
Convention Paper 7507 (Purchase now)

P1-7 Microphone Front-Ends for Spatial Audio Coders – Christof Faller, Illusonic LLC - Lausanne, Switzerland
Spatial audio coders, such as MPEG Surround, have enabled low bit-rate and stereo backwards compatible coding of multichannel surround audio. Directional audio coding (DirAC) can be viewed as spatial audio coding designed around specific microphone front-ends. DirAC is based on B-format spatial sound analysis and has no direct stereo backwards compatibility. We are presenting a number of two capsule-based stereo compatible microphone front-ends and corresponding spatial audio encoder modifications that enable the use of spatial audio coders to directly capture and code surround sound.
Convention Paper 7508 (Purchase now)


Thursday, October 2, 9:00 am — 10:45 am

T1 - Electroacoustic Measurements


Presenter:
Christopher J. Struck, CJS Labs - San Francisco, CA, USA

Abstract:
This tutorial focuses on applications of electroacoustic measurement methods, instrumentation, and data interpretation as well as practical information on how to perform appropriate tests. Linear system analysis and alternative measurement methods are examined. The topic of simulated free field measurements is treated in detail. Nonlinearity and distortion measurements and causes are described. Finally, a number of advanced tests are introduced.

This tutorial is intended to enable the participants to perform accurate audio and electroacoustic tests and provide them with the necessary tools to understand and correctly interpret the results.


Thursday, October 2, 9:00 am — 11:30 am

TT1 - San Francisco Conservatory of Music/Kirkegaard Associates


Abstract:
Founded in 1917, the SFCM is recognized as one of the world’s leading music schools. Its more than 1,200 students and faculty present over 1,500 public performances annually to 100,000+ Bay Area residents and visitors. The Conservatory’s $80 million teaching, performance, rehearsal and practice complex opened in the SF Civic Center in Sept. 2006. This tour will include a presentation by acoustical treatment experts Kirkegaard Associates.

Note: Maximum of 40 participants per tour.


Price: $30 (members), $40 (nonmembers)

Thursday, October 2, 9:00 am — 12:30 pm

P2 - Analysis and Synthesis of Sound


Chair: Hiroko Terasawa, Stanford University - Stanford, CA, USA

P2-1 Spatialized Additive Synthesis of Environmental Sounds – Charles Verron, Orange Labs - Lannion, France, and Laboratoire de Mécanique et d’Acoustique - Marseille, France; Mitsuko Aramaki, Institut de Neurosciences Cognitives de la Méditerranée - Marseille, France; Richard Kronland-Martinet, Laboratoire de Mécanique et d’Acoustique - Marseille, France; Grégory Pallone, Orange Labs - Lannion, France
In a virtual auditory environment, sound sources are typically created in two stages: the “dry” monophonic signal is synthesized, and then the spatial attributes (such as source directivity, width, and position) are applied by specific signal processing algorithms. In this paper we present an architecture that combines additive sound synthesis and 3-D positional audio at the same level of sound generation. Our algorithm is based on inverse fast Fourier transform synthesis and amplitude-based sound positioning. It allows efficient synthesis and spatialization of sinusoids and colored noise, to simulate point-like and extended sound sources. The audio rendering can be adapted to any reproduction system (headphones, stereo, 5.1, etc.). Possibilities offered by the algorithm are illustrated with environmental sounds.
Convention Paper 7509 (Purchase now)
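
A minimal sketch of the idea of generating a partial and positioning it by amplitude in one step, here reduced to a single sinusoid and constant-power stereo panning. The function name, pan law, and layout are assumptions for illustration, not the authors' IFFT-based multichannel implementation.

```python
import numpy as np

def panned_sinusoid(freq_hz, amp, azimuth_deg, dur_s=1.0, fs=48000):
    """Synthesize one partial and position it with constant-power amplitude panning.

    azimuth_deg: -45 (full left) .. +45 (full right); stereo only, for illustration.
    """
    t = np.arange(int(dur_s * fs)) / fs
    partial = amp * np.sin(2 * np.pi * freq_hz * t)
    theta = np.deg2rad(azimuth_deg)                 # map azimuth to pan angle
    left = np.cos(theta + np.pi / 4) * partial      # constant-power pan law
    right = np.sin(theta + np.pi / 4) * partial
    return np.stack([left, right], axis=1)

# A small "environmental" source built from a few harmonics, panned half-right
sig = sum(panned_sinusoid(f, a, azimuth_deg=22.5)
          for f, a in [(220, 1.0), (440, 0.5), (660, 0.25)])
print(sig.shape)  # (48000, 2)
```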

P2-2 Harmonic Sinusoidal + Noise Modeling of Audio Based on Multiple F0 Estimation – Maciej Bartkowiak, Tomasz Zernicki, Poznan University of Technology - Poznan, Poland
This paper deals with the detection and tracking of multiple harmonic series. We consider a bootstrap approach based on prior estimation of F0 candidates and subsequent iterative adjustment of a harmonic sieve with simultaneous refinement of the F0 and inharmonicity factor. Experiments show that this simple approach is an interesting alternative to popular strategies, where partials are detected without harmonic constraints and harmonic series are resolved from mixed sets afterwards. The most important advantage is that common problems of tonal/noise energy confusion in the case of unconstrained peak detection are avoided. Moreover, we employ a popular LP-based tracking method that is generalized to deal with harmonically related groups of partials by using a vector inner product as the prediction error measure. Two alternative extensions of the harmonic model that result in greater naturalness of the reconstructed audio are also proposed in the paper: an individual frequency deviation component and a complex narrowband individual amplitude envelope.
Convention Paper 7510 (Purchase now)

P2-3 Sound Extraction of Delackered Records – Ottar Johnsen, Frédéric Bapst, Ecole d'ingenieurs et d'architectes de Fribourg - Fribourg, Switzerland; Lionel Seydoux, Connectis AG - Berne, Switzerland
Most direct cut records are made of an aluminum or glass plate coated with acetate lacquer. Such records are often cracked due to the shrinkage of the coating, making it impossible to read them mechanically. We present here a technique to reconstruct the sound from such records by scanning the image of the record and combining the sound from the different parts of the "puzzle." The system has been tested by extracting sounds from sound archives in Switzerland and in Austria. The concepts will be presented as well as the main challenges. Extracted sound samples will be played.
Convention Paper 7511 (Purchase now)

P2-4 Parametric Interpolation of Gaps in Audio Signals – Alexey Lukin, Moscow State University - Moscow, Russia; Jeremy Todd, iZotope, Inc. - Cambridge, MA, USA
The problem of interpolation of gaps in audio signals is important for the restoration of degraded recordings. Following the parametric approach based on a sinusoidal model recently suggested in JAES by Lagrange et al., this paper proposes an extension to that interpolation algorithm by considering interpolation of a noisy component in a “sinusoidal + noise” signal model. Additionally, a new interpolator for sinusoidal components is presented and evaluated. The new interpolation algorithm is suitable for a wider range of audio recordings than interpolation of a sinusoidal signal component alone.
Convention Paper 7512 (Purchase now)

P2-5 Classification of Musical Genres Using Audio Waveform Descriptors in MPEG-7 – Nermin Osmanovic, Microsoft Corporation - Seattle, WA, USA
Automated genre classification makes it possible to determine the musical genre of an incoming audio waveform. One application of this is to help listeners find music they like more quickly among millions of tracks in an online music store. By using numerical thresholds and the MPEG-7 descriptors, a computer can analyze the audio stream for occurrences of specific sound events such as kick drum, snare hit, and guitar strum. The knowledge about sound events provides a basis for the implementation of a digital music genre classifier. The classifier inputs a new audio file, extracts salient features, and makes a decision about the musical genre based on the decision rule. The final classification results show a recognition rate in the range 75% to 94% for five genres of music.
Convention Paper 7513 (Purchase now)

P2-6 Loudness Descriptors to Characterize Programs and Music Tracks – Esben Skovenborg, TC Group Research - Risskov, Denmark; Thomas Lund, TC Electronic - Risskov, Denmark
We present a set of key numbers to summarize loudness properties of an audio segment, broadcast program, or music track: the loudness descriptors. The computation of these descriptors is based on a measurement of loudness level, such as specified by the ITU-R BS.1770. Two fundamental loudness descriptors are introduced: Center of Gravity and Consistency. These two descriptors were computed for a collection of audio segments from various sources, media, and formats. This evaluation demonstrates that the descriptors can robustly characterize essential properties of the segments. We propose three different applications of the descriptors: for diagnosing potential loudness problems in ingest material; as a means for performing a quality check, after processing/editing; or for use in a delivery specification.
Convention Paper 7514 (Purchase now)
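
To make the notion of a loudness descriptor concrete, here is a sketch that summarizes a series of short-term loudness readings (e.g., from a BS.1770-style meter) with a center statistic and a spread statistic. These particular statistics, the gate value, and the names are hypothetical stand-ins, not the Center of Gravity and Consistency definitions in the paper.

```python
import numpy as np

def loudness_descriptors(short_term_lufs, gate_lufs=-70.0):
    """Illustrative descriptors computed from a series of short-term loudness
    values (LUFS). The statistics below are hypothetical stand-ins, not the
    descriptors defined in the paper."""
    x = np.asarray(short_term_lufs, dtype=float)
    x = x[x > gate_lufs]                       # drop near-silent blocks
    center = np.median(x)                      # a "center of gravity"-like level
    consistency = np.percentile(x, 95) - np.percentile(x, 10)  # spread in LU
    return center, consistency

levels = [-23.1, -22.8, -25.0, -24.2, -40.0, -23.5, -71.0]
print(loudness_descriptors(levels))
```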

P2-7 Methods for Identification of Tuning System in Audio Musical Signals – Peyman Heydarian, Lewis Jones, Allan Seago, London Metropolitan University - London, UK
The tuning system is an important aspect of a piece of music. It specifies the scale intervals and is an indicator of the emotional character of a piece. There is a direct relationship between musical mode and the tuning of a piece in modal musical traditions, so the tuning system carries valuable information that is worth incorporating into a file's metadata. In this paper different algorithms for automatic identification of the tuning system are presented and compared. In the training process, spectral and chroma averages and pitch histograms are used to construct reference patterns for each class. The same is done for the testing samples, and a similarity measure such as the Manhattan distance classifies a piece into different tuning classes.
Convention Paper 7515 (Purchase now)

P2-8 “Roughometer”: Realtime Roughness Calculation and Profiling – Julian Villegas, Michael Cohen, University of Aizu - Aizu-Wakamatsu, Fukushima-ken, Japan
A software tool capable of determining auditory roughness in real time is presented. This application, based on Pure Data (Pd), calculates the roughness of audio streams using a spectral method originally proposed by Vassilakis. The processing speed is adequate for many real-time applications, and results indicate limited but significant agreement with an Internet application of the chosen model. Finally, the usage of this tool is illustrated by the computation of a roughness profile of a musical composition that can be compared to its perceived patterns of “tension” and “relaxation.”
Convention Paper 7516 (Purchase now)
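
A minimal sketch of the spectral roughness idea: sum pairwise contributions of spectral peaks using a Vassilakis-style roughness curve. The constants are those commonly cited for that model but should be treated as illustrative here, and the peak list would normally come from an FFT frame rather than be hard-coded.

```python
import numpy as np

def pair_roughness(f1, a1, f2, a2):
    """Roughness contribution of one pair of partials (Vassilakis-style model;
    constants as commonly cited, treated here as illustrative assumptions)."""
    fmin, fmax = min(f1, f2), max(f1, f2)
    amin, amax = min(a1, a2), max(a1, a2)
    if amin + amax == 0:
        return 0.0
    s = 0.24 / (0.0207 * fmin + 18.96)              # scales the roughness-vs-interval curve
    x = (amin * amax) ** 0.1
    y = 0.5 * (2.0 * amin / (amin + amax)) ** 3.11
    z = np.exp(-3.5 * s * (fmax - fmin)) - np.exp(-5.75 * s * (fmax - fmin))
    return x * y * z

def total_roughness(freqs, amps):
    """Sum roughness over all pairs of spectral peaks (e.g., from an FFT frame)."""
    r = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            r += pair_roughness(freqs[i], amps[i], freqs[j], amps[j])
    return r

print(total_roughness([440.0, 466.2], [1.0, 1.0]))  # a rough minor-second pair
```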


Thursday, October 2, 10:30 am — 1:00 pm

L1 - Sound Reinforcement of Acoustic Music


Chair:
Rick Chinn
Panelists:
Jim Brown, Audio Systems Group
Mark Frink
Dan Mortensen, Dansound Inc.
Jim van Bergen

Abstract:
Amplifying acoustic music is a touchy subject, especially with musicians. It can be done, and it can be done well. Taste, subtlety, and restraint are the keywords. This live sound event brings together four successful practitioners of the art for a discussion of what can make you successful and what won't. There is one thing for sure: it's not rock-n-roll.


Thursday, October 2, 11:00 am — 1:00 pm

W2 - Archiving and Preservation for Audio Engineers


Chair:
Konrad Strauss
Panelists:
Chuck Ainlay
George Massenburg
John Spencer

Abstract:
The art of audio recording is 130 years old. Recordings from the late 1890s to the present day have been preserved thanks to the longevity of analog media, but can the same be said for today's digital recordings? Digital storage technology is transient in nature, making lifespan and obsolescence a significant concern. Additionally, digital recordings are usually platform-specific, relying on the existence of unique software and hardware, and the practice of nondestructive recording creates a staggering amount of data, much of which is redundant or unneeded. This workshop will address the subject of best practices for storage and preservation of digital audio recordings and outline current thinking and archiving strategies from the home studio to the large production facility.


Thursday, October 2, 1:00 pm — 2:30 pm

Opening Ceremonies
Awards
Keynote Speech


Abstract:
Opening Remarks:
• Executive Director Roger Furness
• President Bob Moses
• Convention Cochairs John Strawn, Valerie Tyler
Program:
• AES Awards Presentation
• Introduction of Keynote Speaker
• Keynote Address by Chris Stone

Awards Presentation

Please join us as the AES presents special awards to those who have made outstanding contributions to the Society in such areas as research, scholarship, and publications, as well as other accomplishments that have contributed to the enhancement of our industry. The awardees are:

PUBLICATIONS AWARD: Roger S. Grinnip III
BOARD OF GOVERNORS AWARD: Jim Anderson, Peter Swarte
FELLOWSHIP AWARD: Jonathan Abel, Angelo Farina, Rob Maher, Peter Mapp, Christoph Musialik, Neil Shaw, Julius Smith, Gerald Stanley, Alexander Voishvillo, William Whitlock
SILVER MEDAL AWARD: Keith Johnson
GOLD MEDAL AWARD: George Massenburg
DISTINGUISHED SERVICE MEDAL AWARD: Jay McKnight

Keynote Speaker

Record Plant co-founder Chris Stone will explore new trends and opportunities in the music industry and what it takes to succeed in today's environment, including how to utilize networking and free services to reduce risk when starting a new small business. Speaking from his strengths as a business/marketing entrepreneur, Stone will focus on artists’ need to develop a sophisticated approach to operating their own businesses, and also on how traditional engineers can remain relevant and play a meaningful role in the ongoing evolution of the recording industry. Stone’s keynote address is entitled “The Artist Owns the Industry.”


Thursday, October 2, 2:30 pm — 4:30 pm

One on One Mentoring Session—Part 1


Abstract:
Students are invited to sign up for an individual meeting with a distinguished mentor from the audio industry. The opportunity to sign up will be given at the end of the opening SDA meeting. Any remaining open spots will be posted in the student area. All students are encouraged to participate in this exciting and rewarding opportunity for individual discussion with industry mentors.


Thursday, October 2, 2:30 pm — 6:00 pm

TT2 - Dolby Laboratories, San Francisco


Abstract:
Visit legendary Dolby Laboratories’ headquarters while you are in San Francisco. Dolby, known for its more than 40 years of audio innovation and leadership, will showcase its latest technologies (audio and video) for high-definition packaged disc media and digital cinema. Demonstrations will take place in Dolby’s state-of-the-art listening rooms, and in their world-class Presentation Studio.

Dolby Laboratories (NYSE:DLB) is the global leader in technologies that are essential elements in the best entertainment experiences. Founded in 1965 and best known for high-quality audio and surround sound, Dolby innovations enrich entertainment at the movies, at home, or on the go. Visit http://www.dolby.com for more information.

Note: Maximum of 40 participants per tour.


Price: $30 (members), $40 (nonmembers)

Thursday, October 2, 2:30 pm — 4:30 pm

L2 - The SOTA of Designing Loudspeakers for Live Sound


Chair:
Tom Young, Electroacoustic Design Services
Panelists:
Tom Danley, Danley Sound Labs
Ales Dravinec, ADRaudio
Dave Gunness, Fulcrum Acoustic
Charlie Hughes, Excelsior Audio Design
Pete Soper, Meyer Sound

Abstract:
The loudspeakers we employ today for live sound (all levels, all types) are vastly improved over what we had on hand when rock and roll first exploded and pushed the limits of what was available back in the 1960s. Following a brief glimpse back in time (to provide a reality check on where we were when many of us started in this field), we will define where we are now. Along with advances made in enclosure design and fabrication, horn design, driver design, system engineering and fabrication, ergonomics and rigging, etc., we are now implementing various methods to improve the overall performance of the drivers and the loudspeaker systems we use, not to mention the advanced methods employed to optimize large systems, improve directivity, beam-steer, etc.

Much of this advancement, at least over the past 15 years or so, is directly related to our use of computers as a design tool for modeling, for complex measurements (both in the lab and in the field) as well as DSP for implementing various processing and monitoring functions. We will clarify what we can do with modern day loudspeakers/systems and where we still need to push further. We may even get our panelists to imagine where they believe we may be headed over the next 5–10 years.


Thursday, October 2, 2:30 pm — 4:30 pm

B3 - Considerations for Facility Design


Chair:
Paul McLane, Radio World
Panelists:
Sam Berkow, SIA Acoustics
William Hallisky, Meridian Design
John Storyk, Walters Storyk

Abstract:
A roundtable chat with design experts Sam Berkow, John Storyk, and William Hallisky. We’ve modified the format of this popular session further to allow attendees to hear from several of today’s top facility designers in a more relaxed and less hurried format.

What makes for an exceptional facility? What are the top pitfalls of facility design? Bring your cup of coffee and share in the conversation as Radio World U.S. Editor in Chief Paul McLane talks with Sam Berkow of SIA Acoustics, John Storyk of Walters-Storyk Design Group, and William Hallisky of Meridian Design Associates, Architects, to learn what leaders in radio/television broadcast and production studios are doing today in architectural, acoustic, and facility design.

How are the demands of today’s multi-platform broadcasters changing the design of facilities? How do streaming, video for radio, and new media affect the process? What does it really mean to say a facility is “green”? How should broadcasters handle cross-training? What are the most common pitfalls broadcasters should avoid in designing and budgeting for a facility? What key decisions must you make today to ensure that your fabulous new facility will still be doing the job in 10 or 20 years?


Thursday, October 2, 2:30 pm — 4:30 pm

T3 - Broadband Noise Reduction: Theory and Applications


Presenters:
Alexey Lukin, iZotope, Inc. - Boston, MA, USA
Jeremy Todd, iZotope, Inc. - Boston, MA, USA

Abstract:
Broadband noise reduction (BNR) is a common technique for attenuating background noise in audio recordings. Implementations of BNR have steadily improved over the past several decades, but the majority of them share the same basic principles. This tutorial discusses various techniques used in the signal processing theory behind BNR, including earlier methods of implementation such as broadband and multiband gates and compander-based systems for tape recording. Beyond these early methods, greater emphasis will be placed on recent advances in more modern techniques such as spectral subtraction, including multi-resolution processing, psychoacoustic models, and the separation of noise into tonal and broadband parts. We will compare examples of each technique for their effectiveness on several types of audio recordings.
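
As a concrete reference point for the spectral subtraction family mentioned above, here is a minimal single-channel sketch: estimate a noise magnitude profile from a noise-only excerpt, subtract it per STFT bin, and keep a spectral floor. The oversubtraction factor, floor, and frame size are illustrative choices, not values from the tutorial.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, noise_clip, fs, over=2.0, floor=0.05, nperseg=1024):
    """Basic magnitude spectral subtraction.

    x: noisy signal; noise_clip: noise-only excerpt used to estimate the noise
    magnitude profile; over/floor: oversubtraction and spectral floor (illustrative)."""
    f, t, X = stft(x, fs, nperseg=nperseg)
    _, _, N = stft(noise_clip, fs, nperseg=nperseg)
    noise_mag = np.mean(np.abs(N), axis=1, keepdims=True)      # average noise spectrum
    mag = np.abs(X)
    cleaned = np.maximum(mag - over * noise_mag, floor * mag)  # subtract, keep floor
    Y = cleaned * np.exp(1j * np.angle(X))                     # reuse noisy phase
    _, y = istft(Y, fs, nperseg=nperseg)
    return y[:len(x)]

fs = 16000
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noise = 0.3 * rng.standard_normal(fs)
print(spectral_subtraction(tone + noise, noise, fs).shape)
```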


Thursday, October 2, 2:30 pm — 4:30 pm

P4 - Acoustic Modeling and Simulation


Chair: Scott Norcross, Communications Research Centre - Ottawa, Ontario, Canada

P4-1 Application of Multichannel Impulse Response Measurement to Automotive Audio – Michael Strauß, Fraunhofer Institute for Digital Media Technology - Ilmenau, Germany, and Technical University of Delft - Delft, The Netherlands; Diemer de Vries, Technical University of Delft - Delft, The Netherlands
Audio reproduction in small enclosures differs in several respects from conventional room acoustics. Today's car audio systems meet sophisticated expectations, but the automotive listening environment still presents critical acoustic properties. During the design of such an audio system it is helpful to gain insight into the temporal and spatial distribution of the acoustic field's properties. Because room acoustic modeling software reaches its limits here, the use of acoustic imaging methods can be seen as a promising approach. This paper describes the application of wave field analysis based on a multichannel impulse response measurement in an automotive use case. After a suitable presentation of the theoretical aspects, the analysis method is used to investigate the acoustic wave field inside a car cabin.
Convention Paper 7521 (Purchase now)

P4-2 Multichannel Low Frequency Room Simulation with Properly Modeled Source Terms—Multiple Equalization Comparison – Ryan J. Matheson, University of Waterloo - Waterloo, Ontario, Canada
At low frequencies, unwanted room resonances in regular-sized rectangular listening rooms cause problems. Various methods for reducing these resonances are available, including some multichannel methods. Thus, with the introduction of setups like 5.1 surround into home theater systems, there are now more options available to perform active resonance control using the existing loudspeaker array. We focus primarily on comparing, separately, each step of loudspeaker placement and its effects on the response in the room, as well as the effect of adding symmetrically placed loudspeakers in the rear to cancel out additional room resonances. The comparison is done by use of a Finite Difference Time Domain (FDTD) simulator, with a focus on properly modeling a source in the simulation. The ability of a standard 5.1 setup to utilize a multichannel equalization technique (without adding additional loudspeakers to the setup) and a modal equalization technique is also discussed.
Convention Paper 7522 (Purchase now)
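
For readers unfamiliar with FDTD, the following is a minimal one-dimensional pressure/velocity leapfrog sketch of the simulation approach named in the abstract. The grid, boundary handling, and source are illustrative and far simpler than the paper's multichannel room model.

```python
import numpy as np

# 1-D FDTD (pressure/velocity staggered leapfrog), illustrative parameters only
c, rho = 343.0, 1.21          # speed of sound (m/s), air density (kg/m^3)
dx = 0.05                     # grid spacing (m)
dt = 0.5 * dx / c             # time step satisfying the CFL condition
nx, nt = 200, 400

p = np.zeros(nx)              # pressure at cell centers
u = np.zeros(nx + 1)          # particle velocity at cell faces

for n in range(nt):
    # update velocity from the pressure gradient (rigid ends: u stays 0 there)
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # update pressure from the velocity divergence
    p -= rho * c**2 * dt / dx * (u[1:] - u[:-1])
    # soft source: a short Gaussian pulse injected near one end
    p[10] += np.exp(-((n - 40) / 12.0) ** 2)

print(p[:5])  # pressure snapshot after nt steps
```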

P4-3 A Super-Wide-Range Microphone with Cardioid Directivity – Kazuho Ono, Takehiro Sugimoto, Akio Ando, NHK Science and Technical Research Laboratories - Tokyo, Japan; Tomohiro Nomura, Yutaka Chiba, Keishi Imanaga, Sanken Microphone Co. Ltd. - Japan
This paper describes a super-wide-range microphone with cardioid directivity, which covers the frequency range up to 100 kHz. The authors have previously developed an omni-directional microphone capable of picking up sounds up to 100 kHz with low noise. The proposed microphone uses an omni-directional capsule adopted from that omni-directional super-wide-range microphone and a bi-directional capsule newly designed to fit the characteristics of the omni-directional one. The output signals of both capsules are combined to achieve cardioid directivity. The measurement results show that the proposed microphone achieves a wide frequency range up to 100 kHz, as well as low noise and excellent cardioid directivity.
Convention Paper 7523 (Purchase now)

P4-4 Methods and Limitations of Line Source Simulation – Stefan Feistel, Ahnert Feistel Media Group - Berlin, Germany; Ambrose Thompson, Martin Audio - High Wycombe, Bucks, UK; Wolfgang Ahnert, Ahnert Feistel Media Group - Berlin, Germany
Although line array systems are in widespread use today, investigations of the requirements and methods for accurate modeling of line sources are scarce. In previous publications the concept of the Generic Loudspeaker Library (GLL) was introduced. We show that on the basis of directional elementary sources with complex directivity data finite line sources can be simulated in a simple, general, and precise manner. We derive measurement requirements and discuss the limitations of this model. Additionally, we present a second step of refinement, namely the use of different directivity data for cabinets of identical type based on their position in the array. All models are validated by measurements. We compare the approach presented with other proposed solutions.
Convention Paper 7524 (Purchase now)


Thursday, October 2, 3:00 pm — 5:00 pm

Evolution of Video Game Sound


Moderator:
John Griffin, Marketing Director, Games, Dolby Laboratories - USA
Panelists:
Simon Ashby, Product Director, Audiokinetic - Canada
Will Davis, Audio Lead, Electronic Arts/Pandemic Studios - USA
Charles Deenen, Sr. Audio Director, Electronic Arts Black Box - Canada
Tom Hays, Director, Technicolor Interactive Services - USA

Abstract:
From the discrete-logic build of Pong to the multi-core processors of modern consoles, video game audio has made giant strides in complexity, toward a heightened level of immersion and user interactivity. From its modest beginnings of monophonic bleeps to today's high-resolution multichannel orchestrations and point-of-view audio panning, audio professionals have creatively stretched the envelope of audio production techniques, as well as game engine capabilities.

The panel of distinguished video game audio professionals will discuss audio production challenges of landmark game platforms, techniques used to maximize the video game audio experience, the dynamics leading to the modern video game soundtracks, and where the video game audio experience is heading.

This event has been organized by Gene Radzik, AES Historical Committee Co-Chair.


Thursday, October 2, 5:00 pm — 6:45 pm

M1 - Basic Acoustics: Understanding the Loudspeaker


Presenter:
John Vanderkooy, University of Waterloo - Waterloo, Ontario, Canada

Abstract:
This presentation is for AES members at an intermediate level and introduces many concepts in acoustics. The basic propagation of sound waves in air for both plane and spherical waves is developed and applied to the operation of a simple, sealed-box loudspeaker. Topics such as acoustic impedance, compact source operation, and diffraction are included. Live demonstrations with a simple loudspeaker, microphone, and measuring computer are used to illustrate the basic radiation principle of a typical electrodynamic driver mounted in a sealed box.


Thursday, October 2, 5:00 pm — 6:45 pm

W5 - Engineering Mistakes We Have Made in Audio


Chair:
Peter Eastty, Oxford Digital Limited - UK
Panelists:
Robert Bristow-Johnson, Audio Imagination
James D. (JJ) Johnston, Neural Audio Corp.
Mel Lambert, Media & Marketing
George Massenburg, Massenburg Design Works
Jim McTigue, Impulsive Audio

Abstract:
Six leading audio product developers will share the enlightening, thought-provoking, and (in retrospect) amusing lessons they have learned from actual mistakes they have made in the product development trenches.


Friday, October 3, 9:00 am — 11:00 am

Perceptual Audio Coding—The First 20 Years


Moderator:
Marina Bosi, Stanford University; author of Introduction to Digital Audio Coding and Standards
Panelists:
Karlheinz Brandenburg, Fraunhofer Institute for Digital Media Technology; TU Ilmenau - Ilmenau, Germany
Bernd Edler, University of Hannover
Louis Fielder, Dolby Laboratories
J. D. Johnston, Neural Audio Corp. - Kirkland, WA, USA
John Princen, BroadOn Communications
Gerhard Stoll, IRT
Ken Sugiyama, NEC

Abstract:
Who would have guessed that teenagers and everybody else would be clamoring for devices with MP3/AAC (MPEG Layer III/MPEG Advanced Audio Coding) perceptual audio coders that fit into their pockets? As perceptual audio coders become more and more integral to our daily lives, residing within DVDs, mobile devices, broad/webcasting, electronic distribution of music, etc., a natural question to ask is: what made this possible and where is this going? This panel, which includes many of the early pioneers who helped advance the field of perceptual audio coding, will present a historical overview of the technology and a look at how the market evolved from niche to mainstream and where the field is heading.


Friday, October 3, 9:00 am — 11:00 am

One on One Mentoring Session—Part 2


Abstract:
Students are invited to sign up for an individual meeting with a distinguished mentor from the audio industry. The opportunity to sign up will be given at the end of the opening SDA meeting. Any remaining open spots will be posted in the student area. All students are encouraged to participate in this exciting and rewarding opportunity for individual discussion with industry mentors.


Friday, October 3, 9:00 am — 11:00 am

M2 - Binaural Audio Technology—History, Current Practice, and Emerging Trends


Presenter:
Robert Schulein, Schaumburg, IL, USA

Abstract:
During the winter and spring of 1931-32, Bell Telephone Laboratories, in cooperation with Leopold Stokowski and the Philadelphia Symphony Orchestra, undertook a series of tests of musical reproduction using the most advanced apparatus obtainable at that time. The objectives were to determine how closely an acoustic facsimile of an orchestra could be approached using both stereo loudspeakers and binaural reproduction. Detailed documents discovered within the Bell Telephone archives will serve as a basis for describing the results and problems revealed while creating the binaural demonstrations. Since these historic events, interest in binaural recording and reproduction has grown in areas such as sound field recording, acoustic research, sound field simulation, audio for electronic games, music listening, and artificial reality. Each of these technologies has its own technical concerns involving transducers, environmental simulation, human perception, position sensing, and signal processing. This Master Class will cover the underlying principles germane to binaural perception, simulation, recording, and reproduction. It will include live demonstrations as well as recorded audio/visual examples.


Friday, October 3, 9:00 am — 12:00 pm

TT3 - Center for New Music and Audio Technology, UC Berkeley


Abstract:
The UC Berkeley Center for New Music and Audio Technologies (CNMAT) houses programs in research, pedagogy, and public performance that are focused on the creative interaction between music and technology. CNMAT's pedagogy program is highly integrated with the Department of Music's graduate program in composition, while the research program is linked with other disciplines and departments on campus such as architecture, mathematics, statistics, mechanical engineering, computer science, electrical engineering, psychology, cognitive science, physics, space sciences, the Center for New Media, and the Department of Theater, Dance, and Performance Studies. Presenters David Wessel (Co-Director, CNMAT) and Adrian Freed (Research Director, CNMAT) will give an overview of CNMAT research projects. For more information, visit http://cnmat.berkeley.edu/.

All visitors are required to sign a Non-Disclosure Agreement to enter the facility.

Note: Maximum of 47 participants per tour.


Price: $35 (members), $45 (nonmembers)

Friday, October 3, 9:00 am — 11:30 am

T4 - Perceptual Audio Evaluation


Presenters:
Søren Bech, Bang & Olufsen A/S - Struer, Denmark
Nick Zacharov, SenseLab - Delta, Denmark

Abstract:
The aim of this tutorial is to provide an overview of perceptual evaluation of audio through listening tests, based on good practices in the audio and affiliated industries. The tutorial is aimed at anyone interested in the evaluation of audio quality and will provide a highly condensed overview of all aspects of performing listening tests in a robust manner. Topics will include: (1) definition of a suitable research question and associated hypothesis, (2) definition of the question to be answered by the subject, (3) scaling of the subjective response, (4) control of experimental variables such as choice of signal, reproduction system, listening room, and selection of test subjects, (5) statistical planning of the experiments, and (6) statistical analysis of the subjective responses. The tutorial will include both theory and practical examples including discussion of the recommendations of relevant international standards (IEC, ITU, ISO). The presentation will be made available to attendees and an extended version will be available in the form of the text “Perceptual Audio Evaluation" authored by Søren Bech and Nick Zacharov.
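
As a small illustration of step (6), statistical analysis of the subjective responses, the sketch below compares two systems graded by the same listeners with a paired t-test. The scores, scale, and significance threshold are invented for illustration; a real test would follow the cited ITU/IEC/ISO procedures.

```python
import numpy as np
from scipy import stats

# Hypothetical MUSHRA-style scores (0-100) from 8 listeners for two codecs
codec_a = np.array([78, 82, 75, 90, 68, 85, 80, 77], dtype=float)
codec_b = np.array([71, 80, 70, 84, 65, 79, 76, 72], dtype=float)

# Paired t-test: the same listeners graded both systems
t_stat, p_value = stats.ttest_rel(codec_a, codec_b)
mean_diff = np.mean(codec_a - codec_b)

print(f"mean difference = {mean_diff:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```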


Friday, October 3, 9:00 am — 11:30 am

P5 - Audio Equipment and Measurements


Chair: John Vanderkooy, University of Waterloo - Waterloo, Ontario, Canada

P5-1 Can One Perform Quasi-Anechoic Loudspeaker Measurements in Normal Rooms? – John Vanderkooy, Stanley Lipshitz, University of Waterloo - Waterloo, Ontario, Canada
This paper is an analysis of two methods that attempt to achieve high-resolution frequency responses at low frequencies from measurements made in normal rooms. Such data is contaminated by reflections before the low-frequency impulse response of the system has fully decayed. By modifying the responses to decay more rapidly, then windowing a reflection-free portion, and finally recovering the full response by deconvolution, these quasi-anechoic methods purport to thwart the usual reciprocal uncertainty relationship between measurement duration and frequency resolution. One method works by equalizing the response down to dc, the other by increasing the effective highpass corner frequency of the system. Each method is studied with simulations, and both appear to work to varying degrees, but we question whether they are measurements or effectively just model extensions. In practice, noise significantly degrades both procedures.
Convention Paper 7525 (Purchase now)

P5-2 Automatic Verification of Large Sound Reinforcement Systems Using Models of Loudspeaker Performance Data – Klas Dalbjörn, Johan Berg, Lab.gruppen AB - Kungsbacka, Sweden
A method is described to automatically verify individual loudspeaker integrity and confirm the proper configuration of amplifier-loudspeaker connections in sound reinforcement systems. Using impedance-sensing technology in conjunction with software-based loudspeaker performance modeling, the procedure verifies that the load presented at each amplifier output corresponds to impedance characteristics as described in the DSP system’s currently loaded model. Accurate verification requires use of load impedance models created by iterative testing of numerous loudspeakers.
Convention Paper 7526 (Purchase now)

P5-3 Bend Radius – Stephen Lampen, Carl Dole, Shulamite Wan, Belden - San Francisco, CA, USA
Designers, installers, and system integrators have many rules and guidelines to follow. Most of these are intended to maximize cable and equipment performance. Many of these are “rules of thumb”: simple guidelines, easy to remember, and often just as easily broken. One of these is the rule of thumb regarding the bending of cable, especially coaxial cable. Many may have heard the term “no tighter than ten times the diameter.” While this can be helpful in a general way, there is a deeper and more complex question. What happens when you do bend cable? What if you have no choice? Often a specific choice of rack or configuration of equipment requires that cables be bent tighter than that recommendation. And what happens if you “unbend” a cable that has been damaged? Does it stay damaged, or can it be restored? This paper outlines a series of laboratory tests to determine exactly what happens when cable is bent and what the reaction is. Further, we analyze the effect of bending on cable performance, specifically looking at impedance variations and return loss (signal reflection). For high-definition video signals (HD-SDI), return loss is the key to maximum cable length, bit errors, and open eye patterns, so analyzing the effect of bending allows us to determine signal quality based on the bending of an individual cable. But does this apply to digital audio cables? Do the relatively low frequencies of AES digital signals make a difference? Can these cables be bent with less effect on performance? These tests were repeated on both coaxial cables of different sizes and twisted pairs. Flexible coax cables were tested, as well as the standard solid-core installation versions. Paired cables consisted of AES digital audio shielded cables; both install and flexible versions were tested.
Convention Paper 7527 (Purchase now)

P5-4 Detecting Changes in Audio Signals by Digital Differencing – Bill Waslo, Liberty Instruments Inc. - Liberty Township, OH, USA
A software application has been developed to provide an accessible method, based on signal subtraction, to determine whether or not an audio signal may have been perceptibly changed by passing through components, cables, or similar processes or treatments. The goals of the program, the capabilities required of it, its effectiveness, and the algorithms it uses are described. The program is made freely available to any interested users for use in such tests.
Convention Paper 7528 (Purchase now)
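
The signal-subtraction idea can be sketched as: time-align the processed signal to the reference, fit a least-squares gain, subtract, and report the residual level. The alignment and gain steps below are generic assumptions for illustration, not the program's actual algorithm.

```python
import numpy as np

def difference_level_db(reference, processed):
    """Align, gain-match, subtract, and report the residual relative to the reference."""
    # integer delay estimate via cross-correlation
    corr = np.correlate(processed, reference, mode="full")
    delay = int(np.argmax(corr)) - (len(reference) - 1)
    if delay > 0:
        processed = processed[delay:]
    elif delay < 0:
        reference = reference[-delay:]
    n = min(len(reference), len(processed))
    ref, proc = reference[:n], processed[:n]
    # least-squares gain so a pure level change nulls completely
    gain = np.dot(proc, ref) / np.dot(proc, proc)
    residual = ref - gain * proc
    return 10 * np.log10(np.sum(residual**2) / np.sum(ref**2) + 1e-20)

fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
y = np.concatenate([np.zeros(5), 0.9 * x])[:len(x)]   # delayed, attenuated copy
print(f"residual: {difference_level_db(x, y):.1f} dB")  # deeply negative: signals null
```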

P5-5 Research on a Measuring Method of Headphones and Earphones Using HATS – Kiyofumi Inanaga, Takeshi Hara, Sony Corporation - Tokyo, Japan; Gunnar Rasmussen, G.R.A.S. Sound & Vibration A/S - Copenhagen, Denmark; Yasuhiro Riko, Riko Associates - Tokyo, Japan
Currently, various types of couplers are used for measurement of headphones and earphones, with the coupler selected by the measurer according to the device under test. Accordingly, it has been difficult to compare the characteristics of headphones and earphones. A measuring method was previously proposed using HATS and a simulated program signal; however, the method had some problems with the shape of the ear hole, and the measured results were not reproducible. We tried to improve the reproducibility of the measurement using several pinna models. As a result, we achieved a measuring platform using HATS that gives good reproducibility of measured results for various types of headphones and earphones and thus makes it possible to compare the measured results fairly.
Convention Paper 7529 (Purchase now)


Friday, October 3, 9:00 am — 1:00 pm

P6 - Loudspeaker Design


Chair: Alexander Voishvillo, JBL Professional - Northridge, CA, USA

P6-1 Loudspeaker Production Variance – Steven Hutt, Equity Sound Investments - Bloomington, IN, USA; Laurie Fincham, THX Ltd. - San Rafael, CA, USA
Numerous quality assurance philosophies designed to manage manufacturing quality have evolved over the last few decades. Managing quality control of production loudspeakers is particularly challenging. Variation of subcomponents and assembly processes across loudspeaker driver production batches may lead to excessive variation of sensitivity, bandwidth, frequency response, distortion characteristics, etc. As loudspeaker drivers are integrated into production audio systems, these variations result in broad performance differences from system to system that affect all aspects of acoustic balance and spatial attributes. This paper will discuss traditional electro-dynamic loudspeaker production variation.
Convention Paper 7530 (Purchase now)

P6-2 Distributed Mechanical Parameters Describing Vibration and Sound Radiation of Loudspeaker Drive Units – Wolfgang Klippel, University of Technology Dresden - Dresden, Germany; Joachim Schlechter, KLIPPEL GmbH - Dresden, Germany
The mechanical vibration of loudspeaker drive units is described by a set of linear transfer functions and geometrical data that are measured at selected points on the surface of the radiator (cone, dome, diaphragm, piston, panel) by using a scanning technique. These distributed parameters supplement the lumped parameters (T/S, nonlinear, thermal), simplify the communication between cone, driver, and loudspeaker system design and open new ways for loudspeaker diagnostics. The distributed vibration can be summarized to a new quantity called accumulated acceleration level (AAL), which is comparable with the sound pressure level (SPL) if no acoustical cancellation occurs. This and other derived parameters are the basis for modal analysis and novel decomposition techniques that make the relationship between mechanical vibration and sound pressure output more transparent. Practical problems and indications for practical improvements are discussed for various example drivers. Finally, the usage of the distributed parameters within finite and boundary element analyses is addressed and conclusions for the loudspeaker design process are made.
Convention Paper 7531 (Purchase now)

P6-3 A New Methodology for the Acoustic Design of Compression Driver Phase-Plugs with Radial Channels – Mark Dodd, Celestion International Ltd. - Ipswich, UK, and GP Acoustics (UK) Ltd. - Maidstone, UK; Jack Oclee-Brown, GP Acoustics (UK) Ltd. - Maidstone, UK, and University of Southampton - Southampton, UK
Recent work by the authors describes an improved methodology for the design of annular-channel, dome compression drivers. Although not so popular, radial channel phase plugs are used in some commercial designs. While there has been some limited investigation into the behavior of this kind of compression driver, the literature is much more extensive for annular types. In particular, the modern approach to compression driver design, based on a modal description of the compression cavity, as first pioneered by Smith, has no equivalent for radial designs. In this paper we first consider if a similar approach is relevant to radial-channel phase plug designs. The acoustical behavior of a radial-channel compression driver is analytically examined in order to derive a geometric condition that ensures minimal excitation of the compression cavity modes.
Convention Paper 7532 (Purchase now)

P6-4 Mechanical Properties of Ferrofluids in Loudspeakers – Guy Lemarquand, Romain Ravaud, Valerie Lemarquand, Claude Depollier, Laboratoire d’Acoustique de l’Université du Maine - Le Mans, France
This paper describes the properties of ferrofluid seals in ironless electrodynamic loudspeakers. The motor consists of several outer stacked ring permanent magnets. The inner moving part is a piston. In addition, two ferrofluid seals are used that replace the classic suspension. Indeed, these seals fulfill several functions. First, they ensure the airtightness between the loudspeaker faces. Second, they act as bearings and center the moving part. Finally, the ferrofluid seals also exert a pull back force on the moving piston. Both radial and axial forces exerted on the piston are calculated thanks to analytical formulations. Furthermore, the shape of the seal is discussed as well as the optimal quantity of ferrofluid. The seal capacity is also calculated.
Convention Paper 7533 (Purchase now)

P6-5 An Ironless Low Frequency Subwoofer Functioning under its Resonance Frequency – Benoit Merit, Université du Maine - Le Mans, France, and Orkidia Audio - Saint Jean de Luz, France; Guy Lemarquand, Université du Maine - Le Mans, France; Bernard Nemoff, Orkidia Audio - Saint Jean de Luz, France
A low frequency loudspeaker (10 Hz to 100 Hz) is described. Its structure is totally ironless in order to avoid nonlinear effects due to the presence of iron. The large diaphragm and the high force factor of the loudspeaker lead to its high efficiency. Efforts have been made to reduce the nonlinearities of the loudspeaker for more accurate sound reproduction. In particular, we have developed a motor made entirely of permanent magnets, which create a uniform induction across the entire intended displacement of the coil. The motor linearity and the high force factor of this flat loudspeaker make it possible to function under its resonance frequency with great accuracy.
Convention Paper 7534 (Purchase now)

P6-6 Line Arrays with Controllable Directional Characteristics—Theory and Practice – Laurie Fincham, Peter Brown, THX Ltd. - San Rafael, CA, USA
A so-called arc line array is capable of providing directivity control. Applying simple amplitude shading can, in theory, provide good off-axis lobe suppression and constant directivity over a frequency range determined at low frequencies by line length and at high frequencies by driver spacing. Array transducer design presents additional challenges: the dual requirements of close spacing, for accurate high-frequency control, and a large effective radiating area, for good bass output, are incompatible with the use of multiple full-range drivers. A novel drive unit layout is proposed, and theoretical and practical design criteria are presented for a two-way line with controllable directivity and virtual elimination of spatial aliasing. The PC-based array controller permits real-time changes in beam parameters for multiple overlaid beams.
Convention Paper 7535 (Purchase now)

P6-7 Loudspeaker Directivity Improvement Using Low Pass and All Pass Filters – Charles Hughes, Excelsior Audio Design & Services, LLC - Gastonia, NC, USA
The response of loudspeaker systems employing multiple drivers within the same pass band is often less than ideal. This is due to the physical separation of the drivers and their lack of proper acoustical coupling within the higher frequency region of their use. The resultant comb filtering is sometimes addressed by applying a low pass filter to one or more of the drivers within the pass band. This can cause asymmetries in the directivity response of the loudspeaker system. A method is presented to greatly minimize these asymmetries through the use of low pass and all pass filters. This method is also applicable as a means to extend the directivity control of a loudspeaker system to lower frequencies.
Convention Paper 7536 (Purchase now)

P6-8 On the Necessary Delay for the Design of Causal and Stable Recursive Inverse Filters for Loudspeaker Equalization – Avelino Marques, Diamantino Freitas, Polytechnic Institute of Porto - Porto, Portugal
The authors have developed and applied a novel approach to the equalization of non-minimum phase loudspeaker systems, based on the design of Infinite Impulse Response (recursive) inverse filters. In this paper the results and improvements attained on this novel IIR filter design method are presented. Special attention has been given to the delay of the equalized system. The boundaries to be posed on the search space of the delay for a causal and stable inverse filter, to be used in the nonlinear least squares minimization routine, are studied, identified, and related with the phase response of a test system and with the order of the inverse filter. Finally, these observations and relations are extended and applied to multi-way loudspeaker systems, demonstrating the connection of the lower and upper bounds of the delay with the loudspeaker’s crossover filters phase response and inverse filter order.
Convention Paper 7537 (Purchase now)


Friday, October 3, 10:00 am — 12:30 pm

TT4 - Paul Stubblebine Mastering/The Tape Project, San Francisco


Abstract:
A world-class mastering studio with credits that include classic recordings for The Grateful Dead and Santana and such new artists as Ferron, California Zephyr, and Jennifer Berezan. Now deeply involved with DVD as well as traditional audio mastering, the studio recently moved to a larger, full service Mission Street complex. The Tape Project remasters recordings for analog tape distribution.

Note: Maximum of 20 participants per tour.


Price: $30 (members), $40 (nonmembers)

Friday, October 3, 11:00 am — 1:00 pm

B5 - Loudness Workshop


Chair:
John Chester
Panelists:
Marvin Caesar, Aphex
James Johnston, Neural Audio Corp.
Thomas Lund, TC Electronic A/S
Andrew Mason, BBC
Robert Orban, Orban/CRL
Jeffery Riedmiller, Dolby Laboratories

Abstract:
New challenges and opportunities await broadcast engineers concerned about optimum sound quality in this contemporary age of multichannel sound and digital broadcasting. The earliest studies in the measurement of loudness levels were directed at telephony issues, with the publication in 1933 of the equal-loudness contours of Fletcher and Munson, and the Bell Labs tests of more than a half-million listeners at the 1938 New York World's Fair demonstrating that age and gender are also important factors in hearing response. A quarter of a century later, broadcasters began to take notice of the often-conflicting requirements of controlling both modulation and loudness levels. These are still concerns today as new technologies are being adopted. This session will explore the current state of the art in the measurement and control of loudness levels and look ahead to the next generation of techniques that may be available to audio broadcasters.


Friday, October 3, 11:30 am — 1:00 pm

P8 - Room Acoustics and Binaural Audio


P8-1 On the Minimum-Phase Nature of Head-Related Transfer Functions
Juhan Nam, Miriam A. Kolar, Jonathan S. Abel, Stanford University - Stanford, CA, USA
For binaural synthesis, head-related transfer functions (HRTFs) are commonly implemented as pure delays followed by minimum-phase systems. Here, the minimum-phase nature of HRTFs is studied. The cross-coherence between minimum-phase and unprocessed measured HRTFs was seen to be greater than 0.9 for a vast majority of the HRTFs, and was rarely below 0.8. Non-minimum-phase filter components resulting in reduced cross-coherence appeared in frontal and ipsilateral directions. The excess group delay indicates that these non-minimum-phase components are associated with regions of moderate HRTF energy. Other regions of excess phase correspond to high-frequency spectral nulls, and have little effect on cross-coherence.
Convention Paper 7546 (Purchase now)
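
For readers who want to experiment with the decomposition discussed above, the following Python sketch shows one common way to build a minimum-phase counterpart of an impulse response via the real cepstrum and to score its similarity as the peak of a normalized cross-correlation. The synthetic impulse response, length, and scoring function are illustrative assumptions; this is not the authors' code or data.

import numpy as np

def minimum_phase(h):
    """Minimum-phase reconstruction of an impulse response via the real cepstrum."""
    n = len(h)
    log_mag = np.log(np.maximum(np.abs(np.fft.fft(h)), 1e-12))
    cep = np.fft.ifft(log_mag).real
    w = np.zeros(n)                      # fold the cepstrum onto positive quefrencies
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.fft.ifft(np.exp(np.fft.fft(w * cep))).real

def cross_coherence(a, b):
    """Peak of the normalized cross-correlation between two impulse responses."""
    n = len(a) + len(b) - 1
    xcorr = np.fft.irfft(np.fft.rfft(a, n) * np.conj(np.fft.rfft(b, n)), n)
    return np.max(np.abs(xcorr)) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

# toy stand-in for a measured HRIR (random decaying noise, not real data)
rng = np.random.default_rng(0)
hrir = rng.standard_normal(256) * np.exp(-np.arange(256) / 20.0)
print(cross_coherence(hrir, minimum_phase(hrir)))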

P8-2 Apparatus Comparison for the Characterization of Spaces
Adam Kestian, Agnieszka Roginska, New York University - NY, NY, USA
This work presents an extension of the Acoustic Pulse Reflectometry (APR) methodology that was previously used to obtain the characteristics of smaller acoustic spaces. Upon reconstructing larger spaces, the geometric configuration and characteristics of the measurement apparatus can be directly related to the clarity of the results. This paper describes and compares three measurement setups and apparatus configurations. The advantages and disadvantages of each methodology are discussed.
Convention Paper 7547 (Purchase now)

P8-3 Quantifying the Effect of Room Response on Automatic Speech Recognition Systems
Jeremy Anderson, John Harris, University of Florida - Gainesville, FL, USA
It has been demonstrated that the acoustic environment has an impact on timbre and speech intelligibility. Automatic speech recognition is an established area that suffers from the negative effects of mismatch between different room impulse responses (RIR). To better understand the changes imparted by the RIR, we have created synthetic responses to simulate utterances recorded in different locations. Using speech recognition techniques to quantify our results, we then looked for trends in performance to connect with impulse response changes.
Convention Paper 7548 (Purchase now)

P8-4 In Situ Determination of Acoustic Absorption Coefficients
Scott Mallais, University of Waterloo - Waterloo, Ontario, Canada
The determination of absorption characteristics for a given material is developed for in situ measurements. Experiments utilize maximum length sequences and a single microphone. The sound pressure is modeled using the compact source approximation. Emphasis is placed on low frequency resolution that is dependent on the geometry of both the loudspeaker-microphone-sample configuration and the room in which the measurement is performed. Methods used to overcome this limitation are discussed. The concept of the acoustic center is applied in the low frequency region, modifying the calculation of the absorption coefficient.
Convention Paper 7549 (Purchase now)

P8-5 Head-Related Transfer Function Customization by Frequency Scaling and Rotation Shift Based on a New Morphological Matching Method
Pierre Guillon, Laboratoire d’Acoustique de l’Université du Maine - Le Mans, France, and Orange Labs, Lannion, France; Thomas Guignard, Rozenn Nicol, Orange Labs - Lannion, France
Head-Related Transfer Functions (HRTFs) individualization is required to achieve high quality Virtual Auditory Spaces. An alternative to acoustic measurements is the customization of non-individual HRTFs. To transform HRTF data, we propose a combination of frequency scaling and rotation shifts, whose parameters are predicted by a new morphological matching method. Mesh models of head and pinnae are acquired, and differences in size and orientation of pinnae are evaluated with a modified Iterative Closest Point (ICP) algorithm. Optimal HRTF transformations are computed in parallel. A relatively good correlation between morphological and transformation parameters is found and allows one to predict the customization parameters from the registration of pinna shapes. The resulting model achieves better customization than frequency scaling only, which shows that adding the rotation degree of freedom improves HRTF individualization.
Convention Paper 7550 (Purchase now)


Friday, October 3, 12:00 pm — 2:30 pm

TT5 - Singer V7 Studios/Universal Audio, San Francisco


Abstract:
Four-time Emmy Award-Winning Composer/Producer/Performer/Writer, and Director Scott Singer has been at the cutting edge of technology-based entertainment for three decades. Most recently Mr. Singer was the Technical Musical Director and Assistant Director for the High-Definition DVD live recordings of Boz Scaggs’ Jazz Album and Greatest Hits, and the HD simulcast of the San Francisco Opera Rigoletto.

Now in its 24th year of operation, Singer Productions and Singer Studios V7 continues to serve as a state of the art recording facility for both Mr. Singer’s projects as well as many other talented recording artists and performers. Scott has just completed a full studio remodel (Version 7) adding the world’s first Bentley Edition Recording Suite—featuring a custom British mixing desk from John Oram/Trident, the “GP40,” as well as classic high-end components from Neve, SSL, Universal Audio, RCA, and GML.

State of the Emulation

Dave Berners will be co-presenting and answering questions on plug-in emulations. They will demo UA gear, with in-studio live vocals by singer Kyah Doran.

Note: Maximum of 30 participants per tour.


Price: $30 (members), $40 (nonmembers)

Friday, October 3, 1:00 pm — 2:00 pm

Lunchtime Keynote: Dave Giovannoni of First Sounds


Abstract:
Before Edison—Recovering the World's First Audio Recordings

First Sounds, an informal collaborative of audio engineers and historians, recently corrected the historical record and made international headlines by playing back a phonautogram made in Paris in April 1860—a ghostly, ten-second evocation of a French folk song. This and other phonautograms establish a forgotten French typesetter as the first person to record reproducible airborne sound 17 years before Edison invented the phonograph. Primitive and nearly accidental, the world’s first audio recordings pose a unique set of technical challenges. David Giovannoni of First Sounds discusses their recovery and restoration and will premiere two newly restored recordings.


Friday, October 3, 2:00 pm — 4:45 pm

Recording Competition - Surround


Abstract:
The Student Recording Competition is a highlight at each convention. A distinguished panel of judges participates in critiquing finalists of each category in an interactive presentation and discussion. Student members can submit stereo and surround recordings in the categories classical, jazz, folk/world music, and pop/rock. Meritorious awards will be presented at the closing Student Delegate Assembly Meeting on Sunday.


Friday, October 3, 2:00 pm — 6:00 pm

TT6 - Tarpan Studios/Ursa Minor Arts & Media, San Rafael


Abstract:
World-renowned producer/artist Narada Michael Walden has owned this gem-like studio for over twenty years. During that time artists such as Aretha Franklin, Whitney Houston, Mariah Carey, Steve Winwood, Kenny G, and Sting have recorded gold and platinum albums here. The tour will also include URSA Minor Arts & Media, an innovative web and multimedia production company.

Note: Maximum of 20 participants per tour.


Price: $35 (members), $45 (nonmembers)

Friday, October 3, 2:30 pm — 4:30 pm

Resume Review


Moderator:
John Strawn
Panelists:
Mark Dolson, Audience
Greg Duckett, Rane
Michael Poimboeuf, DigiDesign
Richard Wear, Interfacio

Abstract:
This session is aimed at job candidates in electrical engineering and computer science who want a private, no-cost, no-obligation confidential review of their resume. You can expect feedback such as: what is missing from the resume; what you should omit from the resume; how to strengthen your explanation of your talents and skills. Recent graduates, juniors, seniors, and graduate students who are now seeking, or will soon be seeking, a full-time employment position in the audio and music industries in hardware or software engineering will especially benefit from participating, but others with more industry experience are also invited. You will meet one-on-one with someone from a company in the audio and music industries with experience in hiring for R&D positions. Bring a paper copy of your resume and be prepared to take notes.


Friday, October 3, 2:30 pm — 4:00 pm

Compressors—A Dynamic Perspective


Moderator:
Fab Dupont
Panelists:
Dave Derr
Wade Goeke
Dave Hill
Hutch Hutchison
George Massenburg
Rupert Neve

Abstract:
A device that, some might say, is being abused by those involved in the “loudness wars,” the dynamic range compressor can also be a very creative tool. But how exactly does it work? Six of the audio industry’s top designers and manufacturers lift the lid on one of the key components in any recording, broadcast or live sound signal chain. They will discuss the history, philosophy and evolution of this often misunderstood processor. Is one compressor design better than another? What design features work best for what application? The panel will also reveal the workings behind the mysteries of feedback and feed-forward designs, side-chains, and hard and soft knees, and explore the uses of multiband, parallel and serial compression.


Friday, October 3, 2:30 pm — 4:15 pm

M3 - Sonic Methodology and Mythology


Presenter:
Keith O. Johnson - Pacifica, CA, USA

Abstract:
Do extravagant designs and superlative specifications satisfy sonic expectations? Can power cords, interconnects, marker dyes and other components in a controversial lineup improve staging, clarity, and other features? Intelligent measurements and neural feedback studies support these sonic issues as well as predict misdirected methodology from speculative thought. Sonic changes and perceptual feats to hear them are possible and we'll explore recorders, LPs, amplifiers, conversion, wire, circuits and loudspeakers to observe how they create artifacts and interact in systems. Hearing models help create and interpret tests intended to excite predictive behaviors of components. Time domain, tone cluster and fast sweep signals along with simple test devices reveal small complex artifacts. Background knowledge of halls, recording techniques, and cognitive perception becomes helpful to interpret results, which can reveal simple explanations to otherwise remarkable physics. Other topics include power amplifiers that can ruin a recording session, noise propagation from regulators, singing wire, coherent noise, eigensonics, and speakers prejudicial to key signatures. Waveform perception, tempo shifting, and learned object sounds will be demonstrated.


Friday, October 3, 2:30 pm — 4:30 pm

L6 - Source-Oriented Live Sound Reinforcement


Chair:
Fred Ampel, Technology Visions
Panelists:
Kurt Graffy, Arup
Dave Haydon, Out Board Electronics
George Johnsen, Threshold Digital Research Labs
Vikram Kirby, Thinkwell Design & Production
Robin Whittaker, Out Board Electronics

Abstract:
Directional amplification, also referred to as Source-Oriented Reinforcement (SOR), describes a practical technique to deliver amplified sound to a large listening area with even coverage while providing directional information to reinforce visual cues and create a realistic and non-contradictory auditory panorama. Audio demonstrations of the fundamental psychoacoustic techniques employed in a SOR design will be presented and limits discussed.

The panel of presenters will outline the history of SOR from the pioneering work of Ahnert, Steinke, and Fels with their Delta Stereophony System in the mid 1970s (later licensed to AKG), to Out Board’s current-day TiMax Audio Imaging Delay Matrix, including the very latest groundbreaking technology employed to enable control of precedence by radar-tracking the actors on the stage.
 
Descriptions of venues and productions that have employed SOR will be included.


Friday, October 3, 2:30 pm — 5:00 pm

TT8 - Singer V7 Studios/Universal Audio, San Francisco


Abstract:
Four-time Emmy Award-Winning Composer/Producer/Performer/Writer, and Director Scott Singer has been at the cutting edge of technology-based entertainment for three decades. Most recently Mr. Singer was the Technical Musical Director and Assistant Director for the High-Definition DVD live recordings of Boz Scaggs’ Jazz Album and Greatest Hits, and the HD simulcast of the San Francisco Opera Rigoletto.

Now in its 24th year of operation, Singer Productions and Singer Studios V7 continues to serve as a state of the art recording facility for both Mr. Singer’s projects as well as many other talented recording artists and performers. Scott has just completed a full studio remodel (Version 7) adding the world’s first Bentley Edition Recording Suite—featuring a custom British mixing desk from John Oram/Trident, the “GP40,” as well as classic high-end components from Neve, SSL, Universal Audio, RCA, and GML.

State of the Emulation

Dave Berners will be co-presenting and answering questions on plug-in emulations. They will demo UA gear, with in-studio live vocals by singer Kyah Doran.

Note: Maximum of 30 participants per tour.


Price: $30 (members), $40 (nonmembers)

Friday, October 3, 2:30 pm — 6:30 pm

P9 - Multichannel Sound Reproduction


Chair: Durand Begault, NASA Ames Research Center - Mountain View, CA, USA

P9-1 An Investigation of 2-D Multizone Surround Sound Systems
Mark Poletti, Industrial Research Limited - Lower Hutt, Wellington, New Zealand
Surround sound systems can produce a desired sound field over an extended region of space by using higher order Ambisonics. One application of this capability is the production of multiple independent soundfields in separate zones. This paper investigates multi-zone surround systems for the case of two-dimensional reproduction. A least squares approach is used for deriving the loudspeaker weights for producing a desired single frequency wave field in one of N zones. It is shown that reproduction in the active zone is more difficult when an inactive zone is in-line with the virtual sound source and the active zone. Methods for controlling this problem are discussed.
Convention Paper 7551 (Purchase now)
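
The least-squares formulation mentioned in the abstract can be illustrated with a small, single-frequency Python sketch. The geometry, the 2-D line-source model (Hankel-function transfer functions), the zone sizes, and the regularization constant below are all illustrative assumptions, not the paper's actual configuration.

import numpy as np
from scipy.special import hankel2

c, f = 343.0, 500.0
k = 2 * np.pi * f / c

# circular array of 32 loudspeakers, radius 2 m (illustrative geometry)
n_ls = 32
phi = np.linspace(0, 2 * np.pi, n_ls, endpoint=False)
ls = 2.0 * np.c_[np.cos(phi), np.sin(phi)]

def zone_points(center, radius=0.15, n=48):
    ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.asarray(center) + radius * np.c_[np.cos(ang), np.sin(ang)]

bright = zone_points([0.5, 0.0])     # zone where a plane wave should be reproduced
dark = zone_points([-0.5, 0.0])      # zone that should stay quiet
pts = np.vstack([bright, dark])

# 2-D free-field transfer functions (line sources): G = -j/4 * H0^(2)(k r)
r = np.linalg.norm(pts[:, None, :] - ls[None, :, :], axis=2)
G = -0.25j * hankel2(0, k * r)

# desired field: plane wave from 45 degrees in the bright zone, zero in the dark zone
kvec = k * np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
d = np.concatenate([np.exp(-1j * bright @ kvec), np.zeros(len(dark))])

# regularized least-squares loudspeaker weights
lam = 1e-3
w = np.linalg.solve(G.conj().T @ G + lam * np.eye(n_ls), G.conj().T @ d)
print(np.max(np.abs(G[:len(bright)] @ w - d[:len(bright)])))   # bright-zone error
print(np.max(np.abs(G[len(bright):] @ w)))                     # residual level in the dark zone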

P9-2 Two-Channel Matrix Surround Encoding for Flexible Interactive 3-D Audio Reproduction
Jean-Marc Jot, Creative Advanced Technology Center - Scotts Valley, CA, USA
The two-channel matrix surround format is widely used for connecting the audio output of a video gaming system to a home theater receiver for multichannel surround reproduction. This paper describes the principles of a computationally-efficient interactive audio spatialization engine for this application. Positional cues including 3-D elevation are encoded for each individual sound source by frequency-independent interchannel phase and amplitude differences, rather than HRTF cues. A matrix surround decoder based on frequency-domain Spatial Audio Scene Coding (SASC) is able to faithfully reproduce both ambient reverberation and positional cues over headphones or arbitrary multichannel loudspeaker reproduction formats, while preserving source separation despite the intermediate encoding over only two channels.
Convention Paper 7552 (Purchase now)

P9-3 Is My Decoder Ambisonic?
Aaron Heller, SRI International - Menlo Park, CA, USA; Richard Lee, Pandit Littoral - Cooktown, Queensland, Australia; Eric Benjamin, Dolby Laboratories - San Francisco, CA, USA
In earlier papers, the present authors established the importance of various aspects of Ambisonic decoder design: a decoding matrix matched to the geometry of the loudspeaker array in use, phase-matched shelf filters, and distance compensation. These are needed for accurate reproduction of spatial localization cues, such as interaural time difference (ITD), interaural level difference (ILD), and distance cues. Unfortunately, many listening tests of Ambisonic reproduction reported in the literature either omit the details of the decoding used or utilize suboptimal decoding. In this paper we review the acoustic and psychoacoustic criteria for Ambisonic reproduction; present a methodology and tools for "black box" testing to verify the performance of a candidate decoder; and present and discuss the results of this testing on some widely used decoders.
Convention Paper 7553 (Purchase now)
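
One widely used way to check a decoder against psychoacoustic criteria is to compute Gerzon's velocity and energy localization vectors (rV, rE) from the decoder gains for virtual sources around the array. The sketch below does this for a simple first-order decode on a regular octagon; the gain formula and component conventions are assumptions made for illustration and are not the authors' actual test tools.

import numpy as np

# regular horizontal octagon of loudspeakers
N = 8
spk_az = np.linspace(0, 2 * np.pi, N, endpoint=False)
spk_u = np.c_[np.cos(spk_az), np.sin(spk_az)]      # unit vectors toward the speakers

def decode_gains(src_az):
    """Speaker gains of a simple first-order 'matching' decode (assumed W=1, X=cos, Y=sin)."""
    W, X, Y = 1.0, np.cos(src_az), np.sin(src_az)
    return (W + 2.0 * (X * np.cos(spk_az) + Y * np.sin(spk_az))) / N

def gerzon_vectors(g):
    """Magnitudes of the velocity (rV) and energy (rE) localization vectors."""
    rv = (g @ spk_u) / np.sum(g)
    re = ((g ** 2) @ spk_u) / np.sum(g ** 2)
    return np.linalg.norm(rv), np.linalg.norm(re)

for az_deg in (0.0, 22.5, 45.0):
    rv, re = gerzon_vectors(decode_gains(np.deg2rad(az_deg)))
    print(f"source at {az_deg:5.1f} deg: |rV| = {rv:.3f}, |rE| = {re:.3f}")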

P9-4 Exploiting Human Spatial Resolution in Surround Sound Decoder Design
David Moore, Jonathan Wakefield, University of Huddersfield - West Yorkshire, UK
This paper presents a technique whereby the localization performance of surround sound decoders can be improved in directions in which human hearing is more sensitive to sound source location. Research into the Minimum Audible Angle is explored and incorporated into a fitness function based upon a psychoacoustic model. This fitness function is used to guide a heuristic search algorithm to design new Ambisonic decoders for a 5-speaker surround sound layout. The derived decoder is successful in matching the variation in localization performance of the human listener with better performance to the front and rear and reduced performance to the sides. The effectiveness of the standard ITU 5-speaker layout versus a non-standard layout is also considered in this context.
Convention Paper 7554 (Purchase now)

P9-5 Surround System Based on Three-Dimensional Sound Field Reconstruction
Filippo M. Fazi, Philip A. Nelson, Jens E. Christensen, University of Southampton - Southampton, UK; Jeongil Seo, Electronics and Telecommunications Research Institute (ETRI) - Daejeon, Korea
The theoretical fundamentals and the simulated and experimental performance of an innovative surround sound system are presented. The proposed technology is based on the physical reconstruction of a three-dimensional target sound field over a region of space using an array of loudspeakers surrounding the listening area. The computation of the loudspeaker gains includes the numerical or analytical solution of an integral equation of the first kind. The experimental setup and the measured reconstruction performance of a system prototype consisting of a three-dimensional array of 40 loudspeakers are described and discussed.
Convention Paper 7555 (Purchase now)

P9-6 A Comparison of Wave Field Synthesis and Higher-Order Ambisonics with Respect to Physical Properties and Spatial Sampling
Sascha Spors, Jens Ahrens, Technische Universität Berlin - Berlin, Germany
Wave field synthesis (WFS) and higher-order Ambisonics (HOA) are two high-resolution spatial sound reproduction techniques aiming at overcoming some of the limitations of stereophonic reproduction techniques. In the past, the theoretical foundations of WFS and HOA have been formulated in quite different fashions. Although some work has been published that aims at comparing both approaches, their similarities and differences are not well documented. This paper formulates the theory of both approaches in a common framework, highlights the different assumptions made to derive the driving functions, and the resulting physical properties of the reproduced wave field. Special attention will be drawn to the spatial sampling of the secondary sources, since the two approaches differ significantly here.
Convention Paper 7556 (Purchase now)

P9-7 Reproduction of Virtual Sound Sources Moving at Supersonic Speeds in Wave Field Synthesis
Jens Ahrens, Sascha Spors, Technische Universität Berlin - Berlin, Germany
In conventional implementations of wave field synthesis, moving sources are reproduced as sequences of stationary positions. As reported in the literature, this process introduces various artifacts. It has been shown recently that these artifacts can be reduced when the physical properties of the wave field of moving virtual sources are explicitly considered. However, the findings were only applied to virtual sources moving at subsonic speeds. In this paper we extend the published approach to the reproduction of virtual sound sources moving at supersonic speeds. The properties of the actual reproduced sound field are investigated via numerical simulations.
Convention Paper 7557 (Purchase now)

P9-8 An Efficient Method to Generate Particle Sounds in Wave Field Synthesis
Michael Beckinger, Sandra Brix, Fraunhofer Institute for Digital Media Technology - Ilmenau, Germany
Rendering a few virtual sound sources for wave field synthesis (WFS) in real time is nowadays feasible using the computing power of state-of-the-art personal computers. However, if immersive atmospheres containing thousands of sound particles, such as rain and applause, are to be rendered in real time for a large listening area with high spatial accuracy, the computational complexity increases enormously. This paper presents a new algorithm, based on continuously generated impulse responses and subsequent convolutions, that renders many sound particles in an efficient way. The algorithm was verified by initial listening tests, and its computational complexity was evaluated as well.
Convention Paper 7558 (Purchase now)


Friday, October 3, 2:30 pm — 5:00 pm

P10 - Nonlinearities in Loudspeakers


Chair: Laurie Fincham, THX Ltd. - San Rafael, CA, USA

P10-1 Audibility of Phase Response Differences in a Stereo Playback System. Part 2: Narrow-Band Stimuli in Headphones and Loudspeakers
Sylvain Choisel, Geoff Martin, Bang & Olufsen A/S - Struer, Denmark
A series of experiments was conducted in order to measure the audibility thresholds of phase differences between channels using mismatched crossover networks. In Part 1 of this study, it was shown that listeners are able to detect very small inter-channel phase differences when presented with wide-band stimuli over headphones, and that the threshold was frequency dependent. This second part of the investigation focuses on listeners’ abilities with narrow-band signals (from 63 to 8000 Hz) in headphones as well as loudspeakers. The results confirm the frequency dependency of the audibility threshold over headphones, whereas for loudspeaker playback the threshold was essentially independent of frequency.
Convention Paper 7559 (Purchase now)

P10-2 Time Variance of the Suspension Nonlinearity
Finn Agerkvist, Technical University of Denmark - Lyngby, Denmark; Bo Rhode Petersen, Aalborg University - Esbjerg, Denmark
It is well known that the resonance frequency of a loudspeaker depends on how it is driven before and during the measurement. Measurements made right after exposing the loudspeaker to high levels of electrical power and/or excursion give lower values than those measured when the loudspeaker is cold. This paper investigates the changes in compliance that the driving signal can cause, including low-level, short-duration measurements of the resonance frequency as well as high-power, long-duration measurements of the nonlinearity of the suspension. It is found that at low levels the suspension softens but recovers quickly. The high-power, long-term measurements affect the nonlinearity of the loudspeaker by increasing the compliance value for all values of displacement. This level dependency is validated with distortion measurements, and it is demonstrated how improved accuracy of the nonlinear model can be obtained by including the level dependency.
Convention Paper 7560 (Purchase now)

P10-3 A Study of the Creep Effect in Loudspeakers Suspension
Finn Agerkvist, Technical University of Denmark - Lyngby, Denmark; Knud Thorborg, Carsten Tinggaard, Tymphany A/S - Taastrup, Denmark
This paper investigates the creep effect, the viscoelastic behavior of loudspeaker suspension parts, which can be observed as an increase in displacement far below the resonance frequency. The creep effect means that the suspension cannot be modeled as a simple spring. The need for an accurate creep model is even greater now that loudspeaker models are expected to remain valid far into the nonlinear domain of the loudspeaker. Different creep models are investigated and implemented, both in simple lumped-parameter models and in time-domain nonlinear models, and the simulation results are compared with a series of measurements on three versions of the same loudspeaker with different thicknesses and rubber types used in the surround.
Convention Paper 7561 (Purchase now)

P10-4 The Influence of Acoustic Environment on the Threshold of Audibility of Loudspeaker Resonances
Shelley Uprichard, Bang & Olufsen A/S - Struer, Denmark and University of Surrey, Guildford, Surrey, UK; Sylvain Choisel, Bang & Olufsen A/S - Struer, Denmark
Resonances in loudspeakers can produce a detrimental effect on sound quality. The reduction or removal of unwanted resonances has therefore become a recognized practice in loudspeaker tuning. This paper presents the results of a listening test that has been used to determine the audibility threshold of a single resonance in different acoustic environments: headphones, loudspeakers in a standard listening room, and loudspeakers in a car. Real loudspeakers were measured and the resonances modeled as IIR filters. Results show that there is a significant interaction between acoustic environment and program material.
Convention Paper 7562 (Purchase now)

P10-5 Confirmation of Chaos in a Loudspeaker System Using Time Series Analysis
Joshua Reiss, Queen Mary, University of London - London, UK; Ivan Djurek, Antonio Petosic, University of Zagreb - Zagreb, Croatia; Danijel Djurek, AVAC – Alessandro Volta Applied Ceramics, Laboratory for Nonlinear Dynamics - Zagreb, Croatia
The dynamics of an experimental electrodynamic loudspeaker is studied by using the tools of chaos theory and time series analysis. Delay time, embedding dimension, fractal dimension, and other empirical quantities are determined from experimental data. Particular attention is paid to issues of stationarity in the system in order to identify sources of uncertainty. Lyapunov exponents and fractal dimension are measured using several independent techniques. Results are compared in order to establish independent confirmation of low dimensional dynamics and a positive dominant Lyapunov exponent. We thus show that the loudspeaker may function as a chaotic system suitable for low dimensional modeling and the application of chaos control techniques.
Convention Paper 7563 (Purchase now)


Friday, October 3, 2:30 pm — 4:00 pm

P11 - Listening Tests & Psychoacoustics


P11-1 Testing Loudness Models—Real vs. Artificial Content
James Johnston, Neural Audio Corp. - Kirkland, WA, USA
A variety of loudness models have been recently proposed and tested by various means. In this paper some basic properties of loudness are examined, and a set of artificial signals are designed to test the "loudness space" based on principles dating back to Harvey Fletcher, or arguably to Wegel and Lane. Some of these signals, designed to model "typical" content, seem to reinforce the results of prior loudness model testing. Other signals, less typical of standard content, seem to show that there are some substantial differences when these less common signals and signal spectra are used.
Convention Paper 7564 (Purchase now)

P11-2 Audibility of High Q-factor All-Pass Components in Head-Related Transfer Functions
Daniela Toledo, Henrik Møller, Aalborg University - Aalborg, Denmark
Head-related transfer functions (HRTFs) can be decomposed into minimum phase, linear phase, and all-pass components. It is known that low Q-factor all-pass sections in HRTFs are audible as lateral shifts when the interaural group delay at low frequencies is above 30 µs. The goal of our investigation is to test the audibility of high Q-factor all-pass components in HRTFs and the perceptual consequences of removing them. A three-alternative forced choice experiment has been conducted. Results suggest that high Q-factor all-pass sections are audible when presented alone, but inaudible when presented with their minimum phase HRTF counterpart. It is concluded that high Q-factor all-pass sections can be discarded in HRTFs used for binaural synthesis.
Convention Paper 7565 (Purchase now)

P11-3 A Psychoacoustic Measurement and ABR for the Sound Signals in the Frequency Range between 10 kHz and 24 kHz
Mizuki Omata, Musashi Institute of Technology - Tokyo, Japan; Kaoru Ashihara, Advanced Industrial Science and Technology - Tsukuba, Japan; Motoki Koubori, Yoshitaka Moriya, Masaki Kyouso, Shogo Kiryu, Musashi Institute of Technology - Tokyo, Japan
In high-definition audio media such as SACD and DVD-Audio, a wide frequency range extending far beyond 20 kHz is used. However, the auditory characteristics for frequencies higher than 20 kHz are not yet well understood. As a first step toward clarifying these characteristics, we conducted a psychoacoustic measurement and an auditory brain-stem response (ABR) measurement for sound signals in the frequency range between 10 kHz and 24 kHz. At a frequency of 22 kHz, the hearing threshold in the psychoacoustic measurement could be measured for 4 of the 5 subjects; the minimum sound pressure level was 80 dB. A threshold of 100 dB in the ABR measurement could be measured for 1 of the 5 subjects.
Convention Paper 7566 (Purchase now)

P11-4 Quantifying the Strategy Taken by a Pair of Ensemble Hand-Clappers under the Influence of Delay
Nima Darabii, Peter Svensson, The Centre for Quantifiable Quality of Service in Communication Systems, NTNU - Trondheim, Norway; Snorre Farner, IRCAM - Paris, France
Pairs of subjects were placed in two acoustically isolated rooms and clapped together under the influence of delays of up to 68 ms. Their trials were recorded and analyzed based on the definition of a compensation factor. This parameter was calculated from the recorded observations for both performers as a discrete function of time and is regarded as a measure of the strategy taken by the subjects while clapping. The compensation factor was shown to have a strong individual as well as a fairly musical dependence. It was also shown to increase with delay, as is needed to avoid a tempo decrease at such high latencies. Virtual anechoic conditions caused less deviation in this factor than reverberant conditions. A slightly positive compensation factor at very short latencies may lead to a tempo acceleration, in accordance with the Chafe effect.
Convention Paper 7567 (Purchase now)

P11-5 Quantitative and Qualitative Evaluations for TV Advertisements Relative to the Adjacent Programs
Eiichi Miyasaka, Akiko Kimura, Musashi Institute of Technology - Yokohama, Kanagawa, Japan
The sound levels of advertisements (CMs) in Japanese conventional terrestrial analog broadcasting (TAB) were quantitatively compared with those in Japanese terrestrial digital broadcasting (TDB). The results show that the average CM sound level in TDB was about 2 dB lower and the average standard deviation was wider than in TAB, while there were few differences between TAB and TDB at some TV stations. Some CMs in TDB were perceived as clearly louder than the adjacent programs even though the sound level differences between the CMs and the programs were only within ±2 dB. Next, the methods used to insert CMs into the main programs in Japan were qualitatively investigated. The results show that some of these methods could unacceptably irritate viewers.
Convention Paper 7568 (Purchase now)


Friday, October 3, 4:00 pm — 6:45 pm

B6 - History of Audio Processing


Chair:
Emil Torick
Panelists:
Dick Burden
Marvin Caesar, Aphex
Glen Clark, Glen Clark & Associates
Mike Dorrough, Dorrough Electronics
Frank Foti, Omnia
Greg J. Ogonowski, Orban/CRL
Bob Orban, Orban/CRL
Eric Small, Modulation Sciences

Abstract:
The participants of this session pioneered audio processing and developed the tools we still use today. A discussion of the developments, technology, and the “Loudness Wars” will take place. This session is a must if you want to understand how and why audio processing is used.


Friday, October 3, 4:30 pm — 7:00 pm

TT9 - Singer V7 Studios/Universal Audio, San Francisco


Abstract:
Four-time Emmy Award-Winning Composer/Producer/Performer/Writer, and Director Scott Singer has been at the cutting edge of technology-based entertainment for three decades. Most recently Mr. Singer was the Technical Musical Director and Assistant Director for the High-Definition DVD live recordings of Boz Scaggs’ Jazz Album and Greatest Hits, and the HD simulcast of the San Francisco Opera Rigoletto.

Now in its 24th year of operation, Singer Productions and Singer Studios V7 continues to serve as a state of the art recording facility for both Mr. Singer’s projects as well as many other talented recording artists and performers. Scott has just completed a full studio remodel (Version 7) adding the world’s first Bentley Edition Recording Suite—featuring a custom British mixing desk from John Oram/Trident, the “GP40,” as well as classic high-end components from Neve, SSL, Universal Audio, RCA, and GML.

State of the Emulation

Dave Berners will be co-presenting and answering questions on plug-in emulations. They will demo UA gear, with in-studio live vocals by singer Kyah Doran.

Note: Maximum of 30 participants per tour.


Price: $30 (members), $40 (nonmembers)

Friday, October 3, 5:00 pm — 6:30 pm

P12 - Amplifiers and Automotive Audio


P12-1 Imperfections and Possible Advances in Analog Summing Amplifier Design
Milan Kovinic, MMK Instruments - Belgrade, Serbia; Dragan Drincic, Advanced School for Electrical & Computer Engineering - Belgrade, Serbia; Sasha Jankovic, OXYGEN-Digital, Parkgate Studio - Sussex, UK
The major requirement in the design of an analog summing amplifier is the quality of the summing bus. The key problem in most common designs is the artifact of summing bus impedance, which cannot be considered a true physical impedance because it is generated by negative feedback. The loop gain of the amplifier used will limit the performance at higher audio frequencies, where the loop gain is lower, increasing inter-channel crosstalk. The inevitable effect of heavy feedback is the increased susceptibility of the amplifier to oscillation as well as sensitivity to RFI. The advanced solution presented in this paper uses a transistor common-base pair (CB-CB) configuration as the summing bus. The CB pair offers inherently low input impedance, low noise, very good frequency response, and, very importantly, makes the application of overall feedback unnecessary.
Convention Paper 7569 (Purchase now)

P12-2 A Switchmode Power Supply Suitable for Audio Power Amplifiers
Jay Gordon, Factor One Inc. - Keyport, NJ, USA
Power supplies for audio amplifiers have different requirements than typical commercial power supplies. A tabulation of power supply parameters that affect the audio application is presented and discussed. Different types of audio amplifiers are categorized and shown to have different requirements. Over time new technologies have emerged that affect the implementation of AC to DC converters used in audio amplifiers. A brief history of audio power supply technology is presented. The evolution of the newly proposed interleaved boost with LLC resonant half bridge topology from preceding technologies is shown. The operation of the new topology is explained and its advantages are shown by a simulation of the circuit.
Convention Paper 7570 (Purchase now)

P12-3 On the Optimization of Enhanced Cascode
Dimitri Danyuk, Consultant - Miami, FL, USA
Twenty years ago the enhanced cascode and other circuit topologies based on the same design principles were presented to audio amplifier designers. The circuit was intended to be incorporated in transconductance gain stages and current sources. The enhanced cascode was used in some commercial products but has not received wide adoption. It was speculated that the enhanced cascode has reduced phase margin and at times higher distortion compared to the conventional cascode. Here the enhanced cascode is analyzed on the basis of distortion and frequency response. It is shown how to make the most of the enhanced cascode, and an optimized novel circuit topology is presented.
Convention Paper 7571 (Purchase now)

P12-4 An Active Load and Test Method for Evaluating the Efficiency of Audio Power Amplifiers
Harry Dymond, Phil Mellor, University of Bristol - Bristol, UK
This paper presents the design, implementation, and use of an “active load” for audio power amplifier efficiency testing. The active load can simulate linear complex loads representative of real-world amplifier operation with a load modulus between 4 and 50 ohms inclusive, load phase-angles between -60° and +60° inclusive, and operates from 20 to 20,000 Hz. The active load allows for the development of an automated test procedure for evaluating the efficiency of an audio power amplifier across a range of output voltage amplitudes, load configurations, and output signal frequencies. The results of testing a class-B and a class-D amplifier, each rated at 100 watts into 8 ohms, are presented.
Convention Paper 7572 (Purchase now)

P12-5 An Objective Method of Measuring Subjective Click-and-Pop Performance for Audio Amplifiers
Kymberly Christman (Schmidt), Maxim Integrated Products - Sunnyvale, CA, USA
Click-and-pop refers to any “clicks” and “pops” or other unwanted, audio-band transient signals that are reproduced by headphones or loudspeakers when the audio source is turned on or off. Until recently, the industry’s characterization of this undesirable effect has been almost purely subjective. Marketing phrases such as “low pop noise” and “clickless/popless operation” illustrate the subjectivity applied in quantifying click-and-pop performance. This paper presents a method that objectively quantifies this parameter, allowing meaningful, repeatable comparisons to be drawn between different components. Further, results of a subjective click-and-pop listening test are presented to provide a baseline for objectionable click-and-pop levels in headphone amplifiers.
Convention Paper 7573 (Purchase now)

P12-6 Effective Car Audio System Enabling Individual Signal Processing Operations of Coincident Multiple Audio Sources through Single Digital Audio Interface Line
Chul-Jae Yoo, In-Sik Ryu, Hyundai Autonet - South Korea
There are three major audio sources in recent car environments: primary audio (usually music, including radio), navigation voice prompts, and hands-free voice. Listening situations in cars include not only listening to a single audio source, but also listening to concurrent multiple audio sources; for example, navigation guidance while listening to music, or navigation guidance or music while talking on a hands-free cell phone. In this paper a conventional external amplifier system, connected to a head unit by three audio interface lines, is first described. Then an effective automotive audio system with a single SPDIF interface line, capable of concurrent processing of the above three kinds of audio sources, is proposed. The new system reduces the wiring harness in car environments and also increases voice quality by transmitting voice signals via an SPDIF digital line rather than via analog lines.
Convention Paper 7574 (Purchase now)

P12-7 Digital Equalization of Automotive Sound Systems Employing Spectral Smoothed FIR Filters
Marco Binelli, Angelo Farina, University of Parma - Parma, Italy
In this paper we investigate the use of spectrally smoothed FIR filters for equalizing a car audio system. The target is also to build short filters that can be processed on DSPs with limited computing power. The inversion algorithm is based on the Nelson-Kirkeby method and on independent phase and magnitude smoothing, by means of a continuous-phase method as shown by Panzer and Ferekidis. The filter aims to create a "target" frequency response, not necessarily flat, employing a small number of taps while maintaining good performance everywhere inside the car's cockpit. As also shown by listening tests, smoothing and the choice of the right target response improve the performance of the car audio system.
Convention Paper 7575 (Purchase now)
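
As a rough illustration of the kind of processing described above, the following sketch applies fractional-octave magnitude smoothing to a toy response and then performs a regularized, Kirkeby-style magnitude inversion, truncating the result to a short FIR filter. The toy response, smoothing width, regularization constant, and windowing are assumptions for illustration; the authors' method additionally treats phase separately and is not reproduced here.

import numpy as np

fs, n_fft, n_taps = 48000, 8192, 512

# toy "measured" response: direct sound plus two crude reflections (not real car data)
h = np.zeros(n_fft)
h[0], h[40], h[200] = 1.0, 0.5, -0.3
H = np.fft.rfft(h)
freqs = np.fft.rfftfreq(n_fft, 1 / fs)

def octave_smooth(mag, freqs, frac=3):
    """Simple fractional-octave power smoothing of a magnitude spectrum."""
    out = np.empty_like(mag)
    for i, f in enumerate(freqs):
        if f <= 0:
            out[i] = mag[i]
            continue
        band = (freqs >= f * 2 ** (-0.5 / frac)) & (freqs <= f * 2 ** (0.5 / frac))
        out[i] = np.sqrt(np.mean(mag[band] ** 2))
    return out

mag_s = octave_smooth(np.abs(H), freqs)

# regularized (Kirkeby-style) magnitude inversion toward a flat target
beta = 1e-2
target = np.ones_like(mag_s)
inv_mag = mag_s * target / (mag_s ** 2 + beta)

# zero-phase spectrum -> symmetric impulse response; center and truncate to a short FIR
g = np.fft.irfft(inv_mag)
g = np.roll(g, n_taps // 2)[:n_taps] * np.hanning(n_taps)
print(len(g), np.round(np.max(np.abs(g)), 3))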

P12-8 Implementation of a Generic Algorithm on Various Automotive Platforms
Thomas Esnault, Jean-Michel Raczinski, Arkamys - Paris, France
This paper describes a methodology for adapting a generic automotive algorithm to various embedded platforms while keeping the same audio rendering. To get around the limitations of the target DSPs, we have developed tools to control the transition from one platform to another, including algorithm adaptation and coefficient computation. Objective and subjective validation processes allow us to certify the quality of the adaptation. With this methodology, productivity has been increased in an industrial context.
Convention Paper 7576 (Purchase now)

P12-9 Advanced Audio Algorithms for a Real Automotive Digital Audio System
Stefania Cecchi, Lorenzo Palestini, Paolo Peretti, Emanuele Moretti, Francesco Piazza, Università Politecnica delle Marche - Ancona, Italy; Ariano Lattanzi, Ferruccio Bettarelli, Leaff Engineering - Porto Potenza Picena (MC), Italy
In this paper an innovative modular digital audio system for car entertainment is proposed. The system is based on a plug-in-based real-time software framework allowing reconfigurability and flexibility. Each plug-in is dedicated to a particular audio task, such as equalization or crossover filtering, implementing innovative algorithms. The system has been tested in a real car environment, with a hardware platform comprising professional audio equipment, running on a PC. Informal listening tests have been performed to validate the overall audio quality, and satisfactory results were obtained.
Convention Paper 7577 (Purchase now)


Friday, October 3, 7:00 pm — 8:30 pm

Heyser Lecture
followed by
Technical Council
Reception


Abstract:
The Richard C. Heyser distinguished lecturer for the 125th AES Convention is Floyd Toole. Toole studied electrical engineering at the University of New Brunswick and at the Imperial College of Science and Technology, University of London, where he received a Ph.D. In 1965 he joined the National Research Council of Canada, where he reached the position of Senior Research Officer in the Acoustics and Signal Processing Group. In 1991, he joined Harman International Industries, Inc. as Corporate Vice President – Acoustical Engineering. In this position he worked with all Harman International companies, and directed the Harman Research and Development Group, a central resource for technology development and subjective measurements, retiring in 2007.

Toole’s research has focused on the acoustics and psychoacoustics of sound reproduction in small rooms, directed to improving engineering measurements, objectives for loudspeaker design and evaluation, and techniques for reducing variability at the loudspeaker/room/listener interface. For papers on these subjects he has received two AES Publications Awards and the AES Silver Medal. He is a Fellow and Past President of the AES and a Fellow of the Acoustical Society of America. In September, 2008, he was awarded the CEDIA Lifetime Achievement Award. He has just completed a book Sound Reproduction: Loudspeakers and Rooms (Focal Press, 2008). The title of his lecture is, “Sound Reproduction: Where We Are and Where We Need to Go.”

Over the past twenty years scientific research has made considerable progress in identifying the significant variables in sound reproduction and in clarifying the psychoacoustic relationships between measurements and perceptions. However, this knowledge is not widespread, and the audio industry remains burdened by unsubstantiated practices and folklore. Oft repeated beliefs can have status and influence commensurate with scientific facts.

One problem has been that much of the essential data was obscured by disorder: the knowledge was buried in papers in numerous books and journals, indexed under many different topics, and sometimes a key point was peripheral to the main subject of the paper. Assembling and organizing the information was the purpose of my recent book, Sound Reproduction (Focal Press, 2008). It turns out that we know a great deal about the acoustics and psychoacoustics of loudspeakers in small rooms, and this knowledge provides substantial guidance about designing and integrating systems to provide high quality sound reproduction.

However, what we hear over these installations is of variable sound quality and, more importantly, not always what was intended by the artists. Inconsistent and imperfect devices and practices in both the professional and consumer domains result in mismatches between recording and playback. Standards exist but are not often used. Many of them are fundamentally flawed. If we in the audio industry are serious about our mission to deliver the aural art in music and movies, as it was created, to consumers, there is work to be done. It begins with agreeing on the objectives, and is followed by an application of the science we know.


Saturday, October 4, 9:00 am — 11:00 am

Career/Job Fair


Abstract:
The Career/Job Fair will feature several companies from the exhibit floor. All attendees of the convention, students and professionals alike, are welcome to come visit with representatives from the companies and find out more about job and internship opportunities in the audio industry. Bring your resume!



Saturday, October 4, 9:00 am — 10:45 am

L7 - 10 Things to Get Right in PA and Sound Reinforcement


Chair:
Peter Mapp

Abstract:
This Live Sound Event will discuss the 10 most important things to get right when designing and operating sound reinforcement and PA systems. However, as attendees at the event will learn, there are many more things to consider than just the 10 golden rules, and the order of importance of these often changes depending upon the venue and type of system. The session aims to provide a practical approach to sound system design and operation and will be illustrated with many practical examples and case histories. Each panelist has many years of practical experience, and between them they can cover just about any aspect of sound reinforcement and PA system design, operation, and technology. Come along to an event that aims to answer questions you never knew you had; but of course, to find out the 10 most important ones, you will have to attend the session!


Saturday, October 4, 9:00 am — 10:30 am

W9 - Low Frequency Acoustic Issues in Small Critical Listening Environments - Today's Audio Production Rooms


Chair:
John Storyk
Panelists:
Renato Cipriano
Dave Kotch

Abstract:
Increasing real estate costs coupled with the reduced size of current audio control room equipment have dramatically impacted the current generation of recording studios. Small room environments (those under 300 s.f.) are now the norm for studio design. These rooms, particularly in view of current 5.1 audio requirements, create special challenges associated with low frequency audio response in an ever-expanding listening sweet spot. Real world conditions and result data will be presented for Ovesan Studios (New York), Roc the Mic Studios (New York), and Diante Do Trono (Brazil).


Saturday, October 4, 9:00 am — 10:30 am

T10 - New Technologies for Up to 7.1 Channel Playback in Any Game Console Format


Presenter:
Geir Skaaden, Neural Audio Corp. - Kirkland, WA, USA

Abstract:
This tutorial investigates methods for increasing the number of audio channels in a gaming console beyond its current hardware limitations. The audio engine within a game is capable of creating a 360° environment; however, the console hardware uses only a few channels to represent this world. If home playback systems are commonly able to reproduce up to 7.1 channels, how do game developers increase the number of playback channels for a platform that is limited to 2 or 5 outputs? New encoding technologies make this possible. Descriptions of current methods will be made in addition to new console-independent technologies that run within the game engine. Game content will be used to demonstrate the encode/decode process.


Saturday, October 4, 9:00 am — 10:30 am

P13 - Spatial Perception


Chair: Richard Duda, San Jose State University - San Jose, CA, USA

P13-1 Individual Subjective Preferences for the Relationship between SPL and Different Cinema Shot Sizes
Roberto Munoz, U. Tecnológica de Chile INACAP - Santiago, Chile; Manuel Recuero, Universidad Politécnica de Madrid - Madrid, Spain; Manuel Gazzo, Diego Duran, U. Tecnológica de Chile INACAP - Santiago, Chile
The main motivation of this study was to find Individual Subjective Preferences (ISP) for the relationship between SPL and different cinema shot sizes. By means of the psychophysical method of Adjustment (MA), the preferred SPL for four of the most frequently used shot sizes, i.e., long shot, medium shot, medium close-up, and close-up, was subjectively quantified.
Convention Paper 7578 (Purchase now)

P13-2 Improvements to a Spherical Binaural Capture Model for Objective Measurement of Spatial Impression with Consideration of Head Movements
Chungeun Kim, Russell Mason, Tim Brookes, University of Surrey - Guildford, Surrey, UK
This research aims, ultimately, to develop a system for the objective evaluation of spatial impression, incorporating the finding from a previous study that head movements are naturally made during its subjective evaluation. A spherical binaural capture model, comprising a head-sized sphere with multiple attached microphones, has been proposed. Research already conducted found significant differences in interaural time and level differences, and in the cross-correlation coefficient, between this spherical model and a head and torso simulator. An attempt is made to lessen these differences by adding a torso and simplified pinnae to the sphere. Further analysis of the head movements made by listeners in a range of listening situations determines the range of head positions that needs to be taken into account. Analysis of these results informs the optimum positioning of the microphones around the sphere model.
Convention Paper 7579 (Purchase now)

P13-3 Predicting Perceived Off-Center Sound Degradation in Surround Loudspeaker Setups for Various Multichannel Microphone Techniques
Nils Peters, Bruno Giordano, Sungyoung Kim, McGill University - Montreal, Quebec, Canada; Jonas Braasch, Rensselaer Polytechnic Institute - Troy, NY, USA; Stephen McAdams, McGill University - Montreal, Quebec, Canada
Multiple listening tests were conducted to examine the influence of microphone techniques on the quality of sound reproduction. Generally, testing focuses on the central listening position (CLP), and neglects off-center listening positions. Exploratory tests focusing on the degradation in sound quality at off-center listening positions were presented at the 123rd AES Convention. Results showed that the recording technique does influence the degree of sound degradation at off-center positions. This paper focuses on the analysis of the binaural re-recording at the different listening positions in order to interpret the results of the previous listening tests. Multiple linear regression is used to create a predictive model which accounts for 85% of the variance in the behavioral data. The primary successful predictors were spectral and the secondary predictors were spatial in nature.
Convention Paper 7580 (Purchase now)


Saturday, October 4, 9:00 am — 12:00 pm

P14 - Listening Tests & Psychoacoustics


Chair: Poppy Crum, Johns Hopkins University - Baltimore, MD, USA

P14-1 Rapid Learning of Subjective Preference in Equalization
Andrew Sabin, Bryan Pardo, Northwestern University - Evanston, IL, USA
We describe and test an algorithm to rapidly learn a listener’s desired equalization curve. First, a sound is modified by a series of equalization curves. After each modification, the listener indicates how well the current sound exemplifies a target sound descriptor (e.g., “warm”). After rating, a weighting function is computed where the weight of each channel (frequency band) is proportional to the slope of the regression line between listener responses and within-channel gain. Listeners report that sounds generated using this function capture their intended meaning of the descriptor. Machine ratings generated by computing the similarity of a given curve to the weighting function are highly correlated to listener responses, and asymptotic performance is reached after only ~25 listener ratings.
Convention Paper 7581 (Purchase now)
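
A minimal sketch of the per-band regression idea described above, with a simulated listener standing in for real ratings (the preference target, band count, and noise level are invented for illustration, and the published algorithm's details may differ):

import numpy as np

rng = np.random.default_rng(1)
n_bands, n_trials = 40, 25

# random probe EQ curves (gain in dB per band) and simulated ratings; the "listener"
# here secretly prefers a tilt from +6 dB (low) to -6 dB (high) -- an invented target
gains = rng.uniform(-12, 12, size=(n_trials, n_bands))
target = np.linspace(6, -6, n_bands)
ratings = gains @ target + rng.normal(0, 5, n_trials)

# per-band weight = slope of the regression of rating against that band's gain
gc = gains - gains.mean(axis=0)
rc = ratings - ratings.mean()
weights = (gc * rc[:, None]).sum(axis=0) / (gc ** 2).sum(axis=0)

def machine_rating(curve):
    """Similarity of a candidate EQ curve to the learned weighting function."""
    return float(np.dot(curve, weights))

print(np.round(weights[:5], 2))
print(machine_rating(target) > machine_rating(-target))   # learned weights prefer the target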

P14-2 An Initial Validation of Individualized Crosstalk Cancellation Filters for Binaural Perceptual Experiments
Alastair Moore, Anthony Tew, University of York - York, UK; Rozenn Nicol, France Télécom R&D - Lannion, France
Crosstalk cancellation provides a means of delivering binaural stimuli to a listener for psychoacoustic research that avoids many of the problems of using headphones in experiments. The aim of this study was to determine whether individual crosstalk cancellation filters can be used to present binaural stimuli that are perceptually indistinguishable from a real sound source. The fast deconvolution with frequency dependent regularization method was used to design the crosstalk cancellation filters. The reproduction loudspeakers were positioned at ±90 degrees azimuth and the synthesized location was 0 degrees azimuth. Eight listeners were tested with three types of stimuli. In twenty-two out of the twenty-four listener/stimulus combinations there were no perceptible differences between the real and virtual sources. The results suggest that this method of producing individualized crosstalk cancellation filters is suitable for binaural perceptual experiments.
Convention Paper 7582 (Purchase now)
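
The filter design named above, fast deconvolution with frequency dependent regularization, can be sketched per frequency bin as a regularized matrix inversion. In the Python sketch below, the toy 2x2 plant, the regularization profile, and the modeling delay are placeholders; individualized filters would instead use measured transfer functions from the ±90-degree loudspeakers to the listener's ears.

import numpy as np

fs, n = 48000, 2048
rng = np.random.default_rng(0)

def toy_path(delay, gain):
    """Placeholder loudspeaker-to-ear impulse response (a spike plus decaying noise)."""
    h = np.zeros(n)
    h[delay] = gain
    h[delay:] += 0.05 * rng.standard_normal(n - delay) * np.exp(-np.arange(n - delay) / 50)
    return h

# 2x2 plant H[freq, ear, speaker]: direct paths and crosstalk paths
H = np.empty((n // 2 + 1, 2, 2), dtype=complex)
H[:, 0, 0] = np.fft.rfft(toy_path(20, 1.0))    # left speaker  -> left ear
H[:, 1, 1] = np.fft.rfft(toy_path(20, 1.0))    # right speaker -> right ear
H[:, 0, 1] = np.fft.rfft(toy_path(28, 0.4))    # right speaker -> left ear
H[:, 1, 0] = np.fft.rfft(toy_path(28, 0.4))    # left speaker  -> right ear

# frequency-dependent regularization: light in the pass band, heavy at the band edges
freqs = np.fft.rfftfreq(n, 1 / fs)
beta = 1e-3 + 1.0 * ((freqs < 100) | (freqs > 16000))

# Kirkeby-style fast deconvolution: C(w) = [H^H H + beta(w) I]^-1 H^H, bin by bin
C = np.empty_like(H)
for i in range(H.shape[0]):
    Hi = H[i]
    C[i] = np.linalg.solve(Hi.conj().T @ Hi + beta[i] * np.eye(2), Hi.conj().T)

# a modeling delay of n/2 samples keeps the resulting filters causal
mod_delay = np.exp(-2j * np.pi * freqs * (n // 2) / fs)
c_filters = np.fft.irfft(C * mod_delay[:, None, None], axis=0)
print(c_filters.shape)     # (2048, 2, 2) crosstalk cancellation filters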

P14-3 Reverberation Echo Density Psychoacoustics
Patty Huang, Jonathan S. Abel, Hiroko Terasawa, Jonathan Berger, Stanford University - Stanford, CA, USA
A series of psychoacoustic experiments were carried out to explore the relationship between an objective measure of reverberation echo density, called the normalized echo density (NED), and subjective perception of the time-domain texture of reverberation. In one experiment, 25 subjects evaluated the dissimilarity of signals having static echo densities. The reported dissimilarities matched absolute NED differences with an R2 of 93%. In a 19-subject experiment, reverberation impulse responses having evolving echo densities were used. With an R2 of 90% the absolute log ratio of the late field onset times matched reported dissimilarities between impulse responses. In a third experiment, subjects reported breakpoints in the character of static echo patterns at NED values of 0.3 and 0.7.
Convention Paper 7583 (Purchase now)
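
The normalized echo density (NED) referred to above is commonly computed as the fraction of samples in a sliding window that exceed the window's standard deviation, normalized by the value expected for Gaussian noise. A small Python sketch with a synthetic impulse response (window length, hop size, and the toy response are arbitrary choices, not the paper's stimuli):

import numpy as np
from scipy.special import erfc

def normalized_echo_density(ir, win=1024, hop=256):
    """Fraction of samples in each window lying outside one standard deviation,
    normalized by the expectation for Gaussian noise (approx. 0.3173)."""
    expected = erfc(1 / np.sqrt(2))
    profile = []
    for start in range(0, len(ir) - win, hop):
        frame = ir[start:start + win]
        profile.append(np.mean(np.abs(frame) > np.std(frame)) / expected)
    return np.array(profile)

# toy impulse response: sparse early reflections, then Gaussian-like late reverberation
rng = np.random.default_rng(0)
n = 48000
ir = np.zeros(n)
ir[rng.choice(4000, 30, replace=False)] = rng.uniform(-1, 1, 30)
ir[4000:] = rng.standard_normal(n - 4000) * np.exp(-np.arange(n - 4000) / 12000)

profile = normalized_echo_density(ir)
print(np.round(profile[:3], 2), np.round(profile[-3:], 2))   # low early, near 1 late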

P14-4 Optimal Modal Spacing and Density for Critical Listening
Bruno Fazenda, Matthew Wankling, University of Huddersfield - Huddersfield, West Yorkshire, UK
This paper presents a study on the subjective effects of modal spacing and density. These are measures often used as indicators to define particular aspect ratios and source positions to avoid low frequency reproduction problems in rooms. These indicators imply a given modal spacing leading to a supposedly less problematic response for the listener. An investigation into this topic shows that subjects can identify an optimal spacing between two resonances associated with a reduction of the overall decay. Further work to define a subjective counterpart to the Schroeder Frequency has revealed that an increase in density may not always lead to an improvement, as interaction between mode-shapes results in serious degradation of the stimulus, which is detectable by listeners.
Convention Paper 7584 (Purchase now)
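
For readers unfamiliar with the modal spacing and density indicators referred to above, the rigid-wall rectangular-room formula f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²) is the usual starting point. The short sketch below lists the modes of a hypothetical room below 200 Hz and their spacing; the room dimensions are invented and the paper's rooms and stimuli are not reproduced here.

import numpy as np
from itertools import product

c = 343.0                        # speed of sound, m/s
L = np.array([5.2, 4.1, 2.8])    # hypothetical room dimensions, m

# rigid-wall rectangular-room mode frequencies below 200 Hz
modes = []
for n in product(range(6), repeat=3):
    if n == (0, 0, 0):
        continue
    f = (c / 2) * np.sqrt(np.sum((np.array(n) / L) ** 2))
    if f <= 200.0:
        modes.append(f)

modes = np.sort(np.array(modes))
spacing = np.diff(modes)
print(f"{len(modes)} modes below 200 Hz")
print(f"mean spacing {spacing.mean():.1f} Hz, minimum spacing {spacing.min():.2f} Hz")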

P14-5 The Illusion of Continuity Revisited on Filling Gaps in the Saxophone Sound
Piotr Kleczkowski, AGH University of Science and Technology - Cracow, Poland
Some time-frequency gaps were cut from a recording of a motif played legato on the saxophone. Subsequently, the gaps were filled with various sonic material: noises and sounds of an accompanying band. The quality of the saxophone sound processed in this way was investigated by listening tests. In all of the tests, the saxophone seemed to continue through the gaps, an impairment in quality being observed as a change in the tone color or an attenuation of the sound level. There were two aims of this research. First, to investigate whether the continuity illusion contributed to this effect, and second, to discover what kind of sonic material filling the gaps would cause the least deterioration in sound quality.
Convention Paper 7585 (Purchase now)

P14-6 The Incongruency Advantage for Sounds in Natural Scenes
Brian Gygi, Veterans Affairs Northern California Health Care System - Martinez, CA, USA; Valeriy Shafiro, Rush University Medical Center - Chicago, IL, USA
This paper tests identification of environmental sounds (dogs barking or cars honking) in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the background scenes and the sounds to be identified showed a significant advantage of about 5% better identification accuracy for sounds that were incongruous with the background scene (e.g., a rooster crowing in a hospital). Studies with naïve listeners showed this effect is level-dependent: there is no advantage for incongruent sounds up to a Sound/Scene ratio (So/Sc) of –7.5 dB, after which there is again about 5% better identification. Modeling using spectral-temporal measures showed that saliency based on acoustic features cannot account for this difference.
Convention Paper 7586 (Purchase now)
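
To reproduce level conditions such as the –7.5 dB point mentioned above, a target sound can be mixed into a background scene at a prescribed Sound/Scene ratio. The sketch below uses an RMS-based definition of So/Sc, which is an assumption on our part; the variable names in the usage comment are hypothetical.

```python
import numpy as np

def mix_at_ratio(sound, scene, so_sc_db):
    """Mix `sound` into `scene` at a given Sound/Scene ratio (dB, RMS-based).

    Assumes both signals share the same sample rate; the longer signal is
    truncated so the lengths match.
    """
    n = min(len(sound), len(scene))
    sound, scene = sound[:n], scene[:n]
    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-12)
    gain = 10 ** (so_sc_db / 20.0) * rms(scene) / rms(sound)
    return scene + gain * sound

# e.g., the -7.5 dB condition mentioned above (hypothetical signals):
# mixture = mix_at_ratio(rooster, hospital_ambience, -7.5)
```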


Saturday, October 4, 9:00 am — 10:30 am

P15 - Loudspeakers—Part 1


P15-1 Advanced Passive Loudspeaker Protection
Scott Dorsey, Kludge Audio - Williamsburg, VA, USA
In a follow-on to a previous conference paper (AES Convention Paper 5881), the author explores the use of polymeric positive temperature coefficient (PPTC) protection devices that have a discontinuous I/V curve that is the result of a physical state change. He gives a simple model for designing networks employing incandescent lamps and PPTC devices together to give linear operation at low levels while providing effective limiting at higher levels to prevent loudspeaker damage. Some discussion of applications in current service is provided.
Convention Paper 7588 (Purchase now)

P15-2 Target Modes in Moving Assemblies of Compression Drivers and Other Loudspeakers
Fernando Bolaños, Pablo Seoane, Acústica Beyma S.A. - Valencia, Spain
This paper deals with how the important modes in a moving assembly of compression drivers and other loudspeakers can be found. Dynamic importance is an essential tool for those who work on modal analysis of systems with many degrees of freedom and complex structures. The calculation or measurement of important modes in moving assemblies is an objective (absolute) method of finding the relevant modes that act on the dynamics of these transducers. Our paper discusses axial modes and breathing modes, which are basic for loudspeakers. The modal generalized masses and the participation factors are useful tools for finding the moving assembly's important modes (target modes). The strain energy of the moving assembly, which represents the amount of available potential energy, is essential as well.
Convention Paper 7589 (Purchase now)

P15-3 Determining Manufacture Variation in Loudspeakers Through Measurement of Thiele/Small Parameters
Scott Laurin, Karl Reichard, Pennsylvania State University - State College, PA, USA
Thiele/Small parameters have become a standard for characterizing loudspeakers. Using fairly straightforward methods, the Thiele/Small parameters for twenty nominally identical loudspeakers were determined. The data were compiled to determine the manufacturing variations. Manufacturing tolerances can have a large impact on the variability and quality of loudspeakers produced. Generally, when more stringent tolerances are applied, there is less variation and drivers become more expensive. Now that the loudspeakers have been characterized, each one will be driven to failure. Some loudspeakers will be intentionally degraded to accelerate failures. The goal is to correlate variation in the Thiele/Small parameters with variation in speaker failure modes and operating life.
Convention Paper 7590 (Purchase now)
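
As an illustration of the kind of straightforward measurement the abstract refers to, the sketch below extracts fs and the Q factors from a free-air impedance magnitude curve using the classic impedance-peak method. It is a generic textbook procedure that assumes a clean, single-peaked curve rising and falling monotonically around resonance; it is not necessarily the authors' exact procedure.

```python
import numpy as np

def ts_from_impedance(freqs, z_mag, re_dc):
    """Estimate fs, Qms, Qes, Qts from a free-air impedance magnitude curve.

    freqs: frequency vector (Hz); z_mag: |Z| in ohms; re_dc: DC voice-coil
    resistance in ohms. Assumes |Z| rises monotonically to a single peak at
    fs and falls monotonically above it.
    """
    i_pk = np.argmax(z_mag)
    fs, z_max = freqs[i_pk], z_mag[i_pk]
    r0 = z_max / re_dc
    z_12 = np.sqrt(r0) * re_dc  # |Z| at the two side frequencies f1, f2

    # f1 below resonance, f2 above, found by interpolation on each flank
    f1 = np.interp(z_12, z_mag[:i_pk + 1], freqs[:i_pk + 1])
    f2 = np.interp(z_12, z_mag[i_pk:][::-1], freqs[i_pk:][::-1])

    qms = fs * np.sqrt(r0) / (f2 - f1)
    qes = qms / (r0 - 1.0)
    qts = qms * qes / (qms + qes)
    return {"fs": fs, "Qms": qms, "Qes": qes, "Qts": qts}
```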

P15-4 About Phase Optimization in Multitone Excitations Delphine Bard, Vincent Meyer, University of Lund - Lund, Sweden
Multitone signals are often used as excitation for the characterization of audio systems. The frequency spectrum of the response consists of harmonics of the frequencies contained in the excitation and intermodulation products. Besides the choice of frequencies, made so as to avoid frequency overlapping, there is also the need to choose adequate magnitudes and phases for the different components that constitute the multitone signal. In this paper we investigate how the choice of the phases affects the properties of the multitone signal, and also how it affects the performance of a compensation method based on Volterra kernels that uses multitone signals as an excitation.
Convention Paper 7591 (Purchase now)
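
To make the phase-choice question concrete, the sketch below compares the crest factor of a multitone excitation synthesized with zero, random, and Schroeder phases. Schroeder phases are offered only as a well-known low-crest-factor baseline and an assumed example; the frequency grid and the phase rule are not taken from the paper.

```python
import numpy as np

def multitone(freqs, phases, sr=48000, dur=1.0):
    """Unit-amplitude sum of sines with the given frequencies and phases."""
    t = np.arange(int(sr * dur)) / sr
    return sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; lower means the energy is spread more evenly in time."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

freqs = np.arange(100.0, 2100.0, 100.0)         # illustrative 20-tone grid
k = np.arange(1, len(freqs) + 1)
schroeder = -np.pi * k * (k - 1) / len(freqs)   # classic low-crest phase rule
for name, ph in [("zero phases", np.zeros(len(freqs))),
                 ("random phases", np.random.uniform(0, 2 * np.pi, len(freqs))),
                 ("Schroeder phases", schroeder)]:
    print(f"{name}: {crest_factor_db(multitone(freqs, ph)):.1f} dB crest factor")
```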

P15-5 Viscous Friction and Temperature Stability of the Mid-High Frequency Loudspeaker
Ivan Djurek, Antonio Petosic, University of Zagreb - Zagreb, Croatia; Danijel Djurek, Alessandro Volta Applied Ceramics (AVAC) - Zagreb, Croatia
Mid-high frequency loudspeakers behave quite differently from low-frequency units with regard to effects arising from the surrounding air medium. Previous work stressed the strong influence of the imaginary part of the viscous force, which significantly affects the resonance frequency of mid-high frequency loudspeakers. The viscous force depends strongly on the temperature and humidity of the surrounding air, and in this paper we evaluate how changes in temperature and humidity affect the loudspeaker's linearity, which may be significant for the quality of sound reproduction.
Convention Paper 7592 (Purchase now)

P15-6 Calorimetric Evaluation of Intrinsic Friction in the Loudspeaker Membrane
Antonio Petosic, Ivan Djurek, University of Zagreb - Zagreb, Croatia; Danijel Djurek, Alessandro Volta Applied Ceramics (AVAC) - Zagreb, Croatia
Friction losses in the vibrating system of an electrodynamic loudspeaker are represented by the intrinsic friction Ri, which enters the equation of motion, and these losses are accompanied by an irreversible release of heat. A method is proposed for measuring the friction losses in the loudspeaker's membrane using a thermocouple temperature probe glued to the membrane. The temperature on the membrane surface fluctuates stochastically as a result of thermo-elastic coupling in the membrane material. Evaluation of the amplitude of the temperature fluctuations enables an absolute and direct evaluation of the intrinsic friction Ri entering the friction force F = Ri·ẋ, irrespective of the type and strength of the nonlinearity associated with the loudspeaker operation.
Convention Paper 7593 (Purchase now)

P15-7 Phantom Powering the Modern Condenser Microphone: A Practical Look at Conditions for Optimized Performance
Mark Zaim, Tadashi Kikutani, Jackie Green, Audio-Technica U.S., Inc.
Phantom Powering a microphone is a decades old concept with powering conventions and methods that may have become obsolete, ineffective, or inefficient. Modern sound techniques, including those of live sound settings, now use many condenser microphones in settings that were previously dominated by dynamics. As a prerequisite for considering a modern phantom power specification or method, we study the efficiencies and requirements of microphones in typical multiple mic and high SPL settings in order to gain understanding of circuit and design requirements for the maximum dynamic range performance.
Convention Paper 7594 (Purchase now)


Saturday, October 4, 10:00 am — 11:00 am

Listening Session


Abstract:
Students are encouraged to bring in their projects to a non-competitive listening session for feedback and comments from Dave Greenspan, a panel, and audience. Students will be able to sign up at the first SDA meeting for time slots. Students who are finalists in the Recording competition are excluded from this event to allow others who were not finalists the opportunity for feedback.


Saturday, October 4, 10:30 am — 1:00 pm

P16 - Spatial Audio Quality with Playback Demonstration on Sunday 9:00 am – 10:00 am


Chair: Francis Rumsey, University of Surrey - Guildford, Surrey, UK

P16-1 QESTRAL (Part 1): Quality Evaluation of Spatial Transmission and Reproduction Using an Artificial Listener
Francis Rumsey, Slawomir Zielinski, Philip Jackson, Martin Dewhirst, Robert Conetta, Sunish George, University of Surrey - Guildford, Surrey, UK; Søren Bech, Bang & Olufsen A/S - Struer, Denmark; David Meares, DJM Consultancy - Sussex, UK
Most current perceptual models for audio quality have so far tended to concentrate on the audibility of distortions and noises that mainly affect the timbre of reproduced sound. The QESTRAL model, however, is specifically designed to take account of distortions in the spatial domain such as changes in source location, width, and envelopment. It is not aimed only at codec quality evaluation but at a wider range of spatial distortions that can arise in audio processing and reproduction systems. The model has been calibrated against a large database of listening tests designed to evaluate typical audio processes, comparing spatially degraded multichannel audio material against a reference. Using a range of relevant metrics and a sophisticated multivariate regression model, results are obtained that closely match those obtained in listening tests.
Convention Paper 7595 (Purchase now)

P16-2 QESTRAL (Part 2): Calibrating the QESTRAL Model Using Listening Test Data
Robert Conetta, Francis Rumsey, Slawomir Zielinski, Philip Jackson, Martin Dewhirst, University of Surrey - Guildford, Surrey, UK; Søren Bech, Bang & Olufsen A/S - Struer, Denmark; David Meares, DJM Consultancy - Sussex, UK; Sunish George, University of Surrey - Guildford, Surrey, UK
The QESTRAL model is a perceptual model that aims to predict changes to spatial quality of service between a reference system and an impaired version of the reference system. To achieve this, the model required calibration using perceptual data from human listeners. This paper describes the development, implementation, and outcomes of a series of listening experiments designed to investigate the spatial quality impairment of 40 processes. Assessments were made using a multi-stimulus test paradigm with a label-free scale, where only the scale polarity is indicated. The tests were performed at two listening positions, using experienced listeners. Results from these calibration experiments are presented. A preliminary study on the process of selecting stimuli is also discussed.
Convention Paper 7596 (Purchase now)

P16-3 QESTRAL (Part 3): System and Metrics for Spatial Quality Prediction
Philip J. B. Jackson, Martin Dewhirst, Rob Conetta, Slawomir Zielinski, Francis Rumsey, University of Surrey - Guildford, Surrey, UK; David Meares, DJM Consultancy - Sussex, UK; Søren Bech, Bang & Olufsen A/S - Struer, Denmark; Sunish George, University of Surrey - Guildford, Surrey, UK
The QESTRAL project aims to develop an artificial listener for comparing the perceived quality of a spatial audio reproduction against a reference reproduction. This paper presents implementation details for simulating the acoustics of the listening environment and the listener's auditory processing. Acoustical modeling is used to calculate binaural signals and simulated microphone signals at the listening position, from which a number of metrics corresponding to different perceived spatial aspects of the reproduced sound field are calculated. These metrics are designed to describe the location, width, and envelopment attributes of a spatial sound scene. Each provides a measure of the perceived spatial quality of the impaired reproduction compared to the reference reproduction. As validation, individual metrics from listening test signals are shown to closely match the subjective results obtained, and they can be used to predict spatial quality for arbitrary signals.
Convention Paper 7597 (Purchase now)

P16-4 QESTRAL (Part 4): Test Signals, Combining Metrics and the Prediction of Overall Spatial Quality
Martin Dewhirst, Robert Conetta, Francis Rumsey, Philip Jackson, Slawomir Zielinski, Sunish George, University of Surrey - Guildford, Surrey, UK; Søren Bech, Bang & Olufsen A/S - Struer, Denmark; David Meares, DJM Consultancy - Sussex, UK
The QESTRAL project has developed an artificial listener that compares the perceived quality of a spatial audio reproduction to a reference reproduction. Test signals designed to identify distortions in both the foreground and background audio streams are created for both the reference and the impaired reproduction systems. Metrics are calculated from these test signals and are then combined using a regression model to give a measure of the overall perceived spatial quality of the impaired reproduction compared to the reference reproduction. The results of the model are shown to match closely the results obtained in listening tests. Consequently, the model can be used as an alternative to listening tests when evaluating the perceived spatial quality of a given reproduction system, thus saving time and expense.
Convention Paper 7598 (Purchase now)

P16-5 An Unintrusive Objective Model for Predicting the Sensation of Envelopment Arising from Surround Sound Recordings
Sunish George, Slawomir Zielinski, Francis Rumsey, Robert Conetta, Martin Dewhirst, Philip Jackson, University of Surrey - Guildford, Surrey, UK; David Meares, DJM Consultancy - West Sussex, UK; Søren Bech, Bang & Olufsen A/S - Struer, Denmark
This paper describes the development of an unintrusive objective model, developed independently as a part of the QESTRAL project, for predicting the sensation of envelopment arising from commercially available 5-channel surround sound recordings. The model was calibrated using subjective scores obtained from listening tests that used a grading scale defined by audible anchors. For predicting subjective scores, a number of features based on the Inter-Aural Cross Correlation (IACC), the Karhunen-Loeve Transform (KLT), and signal energy levels were extracted from the recordings. The ridge regression technique was used to build the objective model, and the calibrated model was validated using listening test scores obtained from a different group of listeners, different stimuli, and a different location. The initial results showed a high correlation between the predicted scores and the actual scores obtained from the listening tests.
Convention Paper 7599 (Purchase now)
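
Of the features listed above, the IACC is the most self-contained. The sketch below computes a broadband interaural cross-correlation coefficient as the maximum normalized cross-correlation over lags of ±1 ms, a common definition assumed here rather than the paper's exact feature set.

```python
import numpy as np

def iacc(left, right, sr, max_lag_ms=1.0):
    """Broadband interaural cross-correlation coefficient (max over +/- 1 ms lags).

    left, right: equal-length binaural signals; sr: sample rate in Hz.
    Values near 1 indicate highly correlated ear signals (low envelopment).
    """
    max_lag = int(sr * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2)) + 1e-12
    vals = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            v = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            v = np.sum(left[:lag] * right[-lag:])
        vals.append(v / norm)
    return max(vals)
```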


Saturday, October 4, 11:00 am — 1:00 pm

L8 - Good Mic Technique—It's Not Just for the Studio: Microphone Selection and Usage for Live Sound


Chair:
Dean Giavaras, Shure Incorporated - Niles, IL, USA
Panelists:
Richard Bataglia
Phil Garfinkel, Audix USA
Mark Gilbert
Dan Healy
Dave Rat, Rat Sound

Abstract:
While there are countless factors that contribute to a good sounding live event, selecting, placing, and using microphones well can make the difference between a pleasant event and a sonic nightmare. Every sound professional has their own approach to microphone technique. This live sound event will feature a panel of experts from microphone manufacturers and sound reinforcement providers who will discuss their tips, tricks, and experience for getting the job done right at the start of the signal path. We will address conventional and nonconventional techniques and share some interesting stories from the trenches hopefully giving everyone a few new ideas to try on their next event. Using good mic technique will ultimately give the live engineer more time and energy to concentrate on taming the rest of the signal chain and maybe even making it to catering!


Saturday, October 4, 11:00 am — 12:00 pm

T11 - [Canceled]



Saturday, October 4, 11:30 am — 1:00 pm

P17 - Loudspeakers—Part 2


P17-1 Accuracy Issues in Finite Element Simulation of Loudspeakers
Patrick Macey, PACSYS Limited - Nottingham, UK
Finite element-based software for simulating loudspeakers has been around for some time but is being used more widely now, due to improved solver functionality, faster hardware, better links to CAD software, and other preprocessing improvements. The analyst has choices to make in what techniques to employ, what approximations might be made, and how much detail to model.
Convention Paper 7600 (Purchase now)

P17-2 Nonlinear Loudspeaker Unit Modeling
Bo Rohde Pedersen, Aalborg University - Esbjerg, Denmark; Finn T. Agerkvist, Technical University of Denmark - Lyngby, Denmark
Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The nonlinear loudspeaker model is based on the major nonlinear functions and expanded with time-varying suspension behavior and flux modulation. The results are presented with FFT plots of three frequencies and different displacement levels. The model errors are discussed and analyzed including a test with a loudspeaker unit where the diaphragm is removed.
Convention Paper 7601 (Purchase now)

P17-3 An Optimized Pair-Wise Constant Power Panning Algorithm for Stable Lateral Sound Imagery in the 5.1 Reproduction System
Sungyoung Kim, Yamaha Corporation - Shizuoka, Japan, and McGill University, Montreal, Quebec, Canada; Masahiro Ikeda, Akio Takahashi, Yamaha Corporation - Shizuoka, Japan
Auditory image control in the 5.1 reproduction system has been a challenge due to the arrangement of loudspeakers, especially in the lateral region. To suppress typical artifacts of a pair-wise constant power algorithm, a new gain ratio between the Left and Left Surround channels has been experimentally determined. Listeners were asked to estimate the gain ratio between two loudspeakers for seven lateral positions so as to set the direction of the sound source. From these gain ratios, a polynomial function was derived in order to parametrically represent the gain ratio for an arbitrary direction. The results of the validation experiments showed that the new function produced stable auditory imagery in the lateral region.
Convention Paper 7602 (Purchase now)
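
For reference, the textbook pair-wise constant-power law that the paper takes as its starting point can be written as in the sketch below; the experimentally derived polynomial gain-ratio function for the Left/Left Surround pair is specific to the paper and is not reproduced here.

```python
import numpy as np

def constant_power_pan(pan):
    """Pair-wise constant-power gains for pan in [0, 1] between two loudspeakers.

    The gains satisfy gA**2 + gB**2 == 1 for every pan position, so the summed
    acoustic power stays constant as the virtual source moves across the pair.
    """
    theta = pan * np.pi / 2.0
    return np.cos(theta), np.sin(theta)

# e.g., halfway between Left and Left Surround:
g_l, g_ls = constant_power_pan(0.5)   # both ~0.707, i.e., -3 dB per channel
```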

P17-4 The Use of Delay Control for Stereophonic Audio Rendering Based on VBAP
Dongil Hyun, Tacksung Choi, Daehee Youn, Yonsei University - Seoul, Korea; Seokpil Lee, Broadcasting-Communication Convergence Research Center KETI - Seongnam, Korea; Youngcheol Park, Yonsei University - Wonju, Korea
This paper proposes a new panning method that can enhance the performance of a stereophonic audio rendering system based on VBAP. The proposed system introduces delay control to enhance the performance of VBAP. Sample delays are used to reduce the energy cancellation caused by out-of-phase components. Preliminary simulations and measurements were conducted to verify the controllability of ILD by delay control between stereophonic loudspeakers. By simulating ILD with delay control, the spatial direction at frequencies where energy cancellation occurred could be perceived more stably than with conventional VBAP. The performance of the proposed system is also verified by a subjective listening test.
Convention Paper 7603 (Purchase now)

P17-5 Ambience Sound Recording Utilizing Dual MS (Mid-Side) Microphone Systems Based upon Frequency Dependent Spatial Cross Correlation (FSCC)—Part-2: Acquisition of On-Stage Sounds
Teruo Muraoka, Takahiro Miura, Tohru Ifukube, University of Tokyo - Tokyo, Japan
In musical sound recording, a forest of microphones is commonly observed. It provides good sound localization and favorable ambience; however, a sparser arrangement is desirable to reduce the labor of setup and mixing. For this purpose the authors studied the sound-image representation of stereophonic microphone arrangements using the Frequency Dependent Spatial Cross Correlation (FSCC), which is the cross correlation of two microphones' outputs. The authors first examined the FSCCs of typical microphone arrangements for the acquisition of ambient sounds and concluded that an MS (Mid-Side) microphone system with its directional azimuth set at 132 degrees is the best. The authors also studied the conditions for on-stage sound acquisition and found that the FSCC of a coaxial-type microphone takes the constant value of +1, which is advantageous for stable sound localization. The authors therefore compared further sound acquisition characteristics of the MS system (directional azimuth set at 120 degrees) and the XY system and found the former to be superior. Finally, the authors propose a dual MS microphone system: one MS pair, set at a directional azimuth of 120 degrees, for on-stage sound acquisition, and the other, set at 132 degrees, for ambient sound acquisition.
Convention Paper 7604 (Purchase now)

P17-6 Ambisonic Loudspeaker Arrays
Eric Benjamin, Dolby Laboratories - San Francisco, CA, USA
The Ambisonic system is one of very few surround sound systems that offer the promise of reproducing full three-dimensional (periphonic) audio. It can be shown that arrays configured as regular polyhedra allow the recreation of an accurate sound field at the center of the array. But the regular polyhedral shape can be impractical for everyday use: the requirement that the listener's head be located at the center of the array forces the lower loudspeakers to be placed beneath the floor, or even a loudspeaker to be placed directly beneath the listener. This is obviously impracticable, especially in domestic applications. Likewise, the width of the array is typically larger than can be accommodated within the room boundaries. The infeasibility of such arrays is a primary reason why they have not been more widely deployed. The intent of this paper is to explore the efficacy of alternative array shapes for both horizontal and periphonic reproduction.
Convention Paper 7605 (Purchase now)

P17-7 Optimum Placement for Small Desktop/PC Loudspeakers
Vladimir Filevski, Audio Expert DOO - Skopje, Macedonia
A desktop/PC loudspeaker usually stands on a desk, so the direct sound from the loudspeaker interferes with the sound reflected from the desk. On the desk, a "perfect" loudspeaker with a flat anechoic frequency response will not give a flat but a comb-like resultant frequency response. This paper presents a simple and inexpensive solution to this problem: a small, conventional loudspeaker is placed on a holder. The holder is a horizontal pivoting telescopic arm that enables easy positioning of the loudspeaker. One end of the arm is attached to the top corner of the PC monitor and the other end to the loudspeaker. The listener extends and rotates the arm in the horizontal plane to a position from which no reflection from the desk or from the PC monitor reaches the listener, thus preserving the presumably flat anechoic frequency response of the loudspeaker.
Convention Paper 7606 (Purchase now)


Saturday, October 4, 1:00 pm — 2:30 pm

Career Workshop


Co-moderators:
Gary Gottlieb
Keith Hatschek

Abstract:
This interactive workshop has been programmed based on student member input. Topics and questions were submitted by various student sections, which polled students for the most in-demand topics. The final topics are focused on education and career development within the audio industry, and a panel has been selected to best address them. An online discussion based on this talk will continue on the forums at aes-sda.org, the official student website of the AES.


Saturday, October 4, 1:00 pm — 2:00 pm

Lunchtime Keynote:
Peter Gotcher
of Topspin Media


Abstract:
The Music Business Is Dead—Long Live the NEW Music Business!

Peter Gotcher will deliver a high-level view of the changing business models facing the music industry today. Gotcher will explain why it no longer works for artists to derive their income from record labels that provide a tiny share of high volume sales. He will also explore new revenue models that include multiple revenue streams for artists; the importance of getting rid of unproductive middlemen; and generating more revenue from fewer fans.


Saturday, October 4, 2:00 pm — 4:00 pm

TT14 - WAM Studios, San Francisco


Abstract:
Women's Audio Mission is a San Francisco-based, non-profit organization dedicated to the advancement of women in music production and the recording arts. In a field where women are chronically under-represented (less than 5%), WAM seeks to "change the face of sound" by providing hands-on training, experience, career counseling, and job placement to women and girls in media technology for music, radio, film, television, and the internet. WAM believes that women's mastery of music technology and inclusion in the production process will expand the vision and voice of media and popular culture.

Note: Maximum of 20 participants per tour.


Price: $30 (members), $40 (nonmembers)

Saturday, October 4, 2:30 pm — 6:30 pm

Recording Competition - Stereo


Abstract:
The Student Recording Competition is a highlight at each convention. A distinguished panel of judges participates in critiquing finalists of each category in an interactive presentation and discussion. Student members can submit stereo and surround recordings in the categories classical, jazz, folk/world music, and pop/rock. Meritorious awards will be presented at the closing Student Delegate Assembly Meeting on Sunday.


Saturday, October 4, 2:30 pm — 4:30 pm

B9 - Listener Fatigue and Longevity


Chair:
David Wilson, CEA
Panelists:
Sam Berkow, SIA Acoustics
Marvin Caesar, Aphex
James Johnston, Neural Audio Corp.
Ted Ruscitti, On-Air Research

Abstract:
This panel will discuss listener fatigue and its impact on listener retention. While listener fatigue is an issue of interest to broadcasters, it is also an issue of interest to telecommunications service providers, consumer electronics manufacturers, music producers, and others. Fatigued listeners to a broadcast program may tune out, while fatigued listeners to a cell phone conversation may switch to another carrier, and fatigued listeners to a portable media player may purchase another company’s product. The experts on this panel will discuss their research and experiences with listener fatigue and its impact on listener retention.


Saturday, October 4, 2:30 pm — 4:30 pm

T12 - Damping of the Room Low-Frequency Acoustics (Passive and Active)


Presenters:
Reza Kashani, University of Dayton - Dayton, OH, USA
Jim Wischmeyer, Modular Sound Systems, Inc. - Lake Barrington, IL, USA

Abstract:
As the result of its size and geometry, a room excessively amplifies sound at certain frequencies. This is the result of standing waves (acoustic resonances/modes) of the room. These are waves whose original oscillation is continuously reinforced by their own reflections. Rooms have many resonances, but only the low-frequency ones are discrete, distinct, unaffected by the sound absorbing material in the room, and accommodate most of the acoustic energy build-up in the room.


Saturday, October 4, 2:30 pm — 4:00 pm

P18 - Innovative Audio Applications


Chair: Cynthia Bruyns-Maxwell, University of California Berkeley - Berkeley, CA, USA

P18-1 An Audio Reproduction Grand Challenge: Design a System to Sonic Boom an Entire House
Victor W. Sparrow, Steven L. Garrett, Pennsylvania State University - University Park, PA, USA
This paper describes an ongoing research study to design a simulation device that can accurately reproduce sonic booms over the outside surface of an entire house. Sonic booms and previous attempts to reproduce them will be reviewed. The authors will present some calculations that suggest that it will be very difficult to produce the required pressure amplitudes using conventional sound reinforcement electroacoustic technologies. However, an additional purpose is to make AES members aware of this research and to solicit feedback from attendees prior to a January 2009 down-selection activity for the design of an outdoor sonic boom simulation system.
Convention Paper 7607 (Purchase now)

P18-2 A Platform for Audiovisual Telepresence Using Model- and Data-Based Wave-Field Synthesis
Gregor Heinrich, Fraunhofer Institut für Graphische Datenverarbeitung (IGD) - Darmstadt, Germany, and vsonix GmbH, Darmstadt, Germany; Christoph Jung, Volker Hahn, Michael Leitner, Fraunhofer Institut für Graphische Datenverarbeitung (IGD) - Darmstadt, Germany
We present a platform for real-time transmission of immersive audiovisual impressions using model- and data-based audio wave-field analysis/synthesis and panoramic video capturing/projection. The audio sub-system focused on in this paper is based on circular cardioid microphone and weakly directional loudspeaker arrays. We report on both linear and circular setups that feed different wave-field synthesis systems. While we can report on perceptual results for the model-based wave-field synthesis prototypes with beamforming and supercardioid input, we present findings for the data-based approach derived using experimental simulations. This data-based wave-field analysis/synthesis (WFAS) approach uses a combination of cylindrical-harmonic decomposition of cardioid array signals and angular windowing to enforce causal propagation of the synthesized field. Specifically, our contributions include (1) the creation of a high-resolution reproduction environment that is omnidirectional in both the auditory and visual modality, as well as (2) a study of data-based WFAS for real-time holophonic reproduction with realistic microphone directivities.
Convention Paper 7608 (Purchase now)

P18-3 SMART-I2: “Spatial Multi-User Audio-Visual Real-Time Interactive Interface”
Marc Rébillat, University of Paris Sud - Paris, France; Etienne Corteel, sonic emotion ag - Oberglatt, Switzerland; Brian F. Katz, University of Paris Sud - Paris, France
The SMART-I2 aims at creating a precise and coherent virtual environment by providing users with accurate audio and visual localization cues. Wave Field Synthesis for audio rendering and tracked stereoscopy for visual rendering are each known to permit high-quality spatial immersion within an extended space. The proposed system combines these two rendering approaches through the use of a large multi-actuator panel that serves both as a loudspeaker array and as a projection screen, considerably reducing audio-visual incoherencies. The system performance has been confirmed by an objective validation of the audio interface and a perceptual evaluation of the audio-visual rendering.
Convention Paper 7609 (Purchase now)


Saturday, October 4, 2:30 pm — 5:30 pm

P19 - Spatial Audio Processing


Chair: Agnieszka Roginska, New York University - New York, NY, USA

P19-1 Head-Related Transfer Functions Reconstruction from Sparse Measurements Considering a Priori Knowledge from Database Analysis: A Pattern Recognition Approach
Pierre Guillon, Rozenn Nicol, Orange Labs - Lannion, France; Laurent Simon, Laboratoire d’Acoustique de l’Université du Maine - Le Mans, France
Individualized Head-Related Transfer Functions (HRTFs) are required to achieve high-quality Virtual Auditory Spaces. This paper proposes to decrease the total number of measured directions in order to make acoustic measurements more comfortable. To overcome the limit of sparseness at which classical interpolation techniques fail to properly reconstruct HRTFs, additional knowledge has to be injected. Focusing on the spatial structure of HRTFs, the analysis of a large HRTF database makes it possible to introduce spatial prototypes. After a pattern recognition process, these prototypes serve as a well-informed background for the reconstruction of any sparsely measured set of individual HRTFs. This technique shows better spatial fidelity than blind interpolation techniques.
Convention Paper 7610 (Purchase now)

P19-2 Near-Field Compensation for HRTF Processing
David Romblom, Bryan Cook, Sennheiser DSP Research Lab - Palo Alto, CA, USA
It is difficult to present near-field virtual audio displays using available HRTF filters, as most existing databases are measured at a single distance in the far-field of the listener’s head. Measuring near-field data is possible, but would quickly become tiresome due to the large number of distances required to simulate sources moving close to the head. For applications requiring a compelling near-field virtual audio display, one could compensate the far-field HRTF filters with a scheme based on 1/r spreading roll off. However, this would not account for spectral differences that occur in the near-field. Using difference filters based on a spherical head model, as well as a geometrically accurate HRTF lookup scheme, we are able to compensate existing data and present a convincing virtual audio display for near field distances.
Convention Paper 7611 (Purchase now)

P19-3 A Method for Estimating Interaural Time Difference for Binaural Synthesis
Juhan Nam, Jonathan S. Abel, Julius O. Smith III, Stanford University - Stanford, CA, USA
A method for estimating interaural time difference (ITD) from measured head-related transfer functions (HRTFs) is presented. The method forms ITD as the difference in left-ear and right-ear arrival times, estimated as the times of maximum cross-correlation between measured HRTFs and their minimum-phase counterparts. The arrival time estimate is related to a least-squares fit to the measured excess phase, emphasizing those frequencies having large HRTF magnitude and deweighting large phase delay errors. As HRTFs are nearly minimum-phase, this method is robust compared to the conventional approach of cross-correlating left-ear and right-ear HRTFs, which can be very different. The method also performs slightly better than techniques averaging phase delay over a limited frequency range.
Convention Paper 7612 (Purchase now)
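
A minimal sketch of the arrival-time idea, assuming a cepstrum-based minimum-phase reconstruction: each measured head-related impulse response is cross-correlated with its minimum-phase counterpart, the lag of the correlation maximum is taken as that ear's arrival time, and the ITD is the left/right difference. The FFT size and helper names are illustrative, not the authors' implementation.

```python
import numpy as np

def minimum_phase(h, nfft=None):
    """Minimum-phase counterpart of impulse response h via the real cepstrum."""
    nfft = nfft or 4 * len(h)
    mag = np.abs(np.fft.fft(h, nfft)) + 1e-12
    cep = np.fft.ifft(np.log(mag)).real
    # fold the cepstrum: keep c[0], double positive quefrencies, zero the rest
    fold = np.zeros(nfft)
    fold[0] = cep[0]
    fold[1:nfft // 2] = 2 * cep[1:nfft // 2]
    fold[nfft // 2] = cep[nfft // 2]
    h_min = np.fft.ifft(np.exp(np.fft.fft(fold))).real
    return h_min[:len(h)]

def arrival_time(h, sr):
    """Arrival time (s) = lag of the max cross-correlation with the min-phase version."""
    hm = minimum_phase(h)
    xc = np.correlate(h, hm, mode="full")
    lag = np.argmax(xc) - (len(hm) - 1)
    return lag / sr

def itd(h_left, h_right, sr):
    """Interaural time difference as the difference of left/right arrival times."""
    return arrival_time(h_left, sr) - arrival_time(h_right, sr)
```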

P19-4 Efficient Delay Interpolation for Wave Field Synthesis
Andreas Franck, Karlheinz Brandenburg, Fraunhofer Institute for Digital Media Technology - Ilmenau, Germany; Ulf Richter, Fraunhofer Institute for Digital Media Technology - Ilmenau, Germany, and HTWK Leipzig, Leipzig, Germany
Wave Field Synthesis enables the reproduction of complex auditory scenes and moving sound sources. Moving sound sources induce time-variant delays of the source signals. To avoid severe distortions, sophisticated delay interpolation techniques must be applied. The typically large numbers of both virtual sources and loudspeakers in a WFS system result in a very high number of simultaneous delay operations, making delay interpolation one of the most performance-critical aspects of a WFS rendering system. In this paper we investigate suitable delay interpolation algorithms for WFS. To overcome the prohibitive computational cost of high-quality algorithms, we propose a computational structure that achieves a significant complexity reduction through a novel algorithm partitioning and efficient data reuse.
Convention Paper 7613 (Purchase now)
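
To show why delay interpolation matters, here is a minimal fractionally interpolated delay-line read using linear interpolation; the higher-quality interpolators and the partitioned computation structure proposed in the paper are deliberately not reproduced.

```python
import numpy as np

def read_fractional_delay(buf, write_pos, delay_samples):
    """Read from circular buffer `buf` at a (possibly fractional) delay.

    Linear interpolation between the two neighboring samples means that a
    time-varying delay (a moving WFS source) changes smoothly instead of
    producing clicks from integer-sample jumps.
    """
    n = len(buf)
    pos = (write_pos - delay_samples) % n
    i0 = int(np.floor(pos))
    frac = pos - i0
    i1 = (i0 + 1) % n
    return (1.0 - frac) * buf[i0] + frac * buf[i1]
```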

P19-5 Obtaining Binaural Room Impulse Responses from B-Format Impulse Responses
Fritz Menzer, Christof Faller, Ecole Polytechnique Fédérale de Lausanne - Lausanne, Switzerland
Given a set of head-related transfer functions (HRTFs) and a room impulse response measured with a Soundfield microphone, the proposed technique computes binaural room impulse responses (BRIRs) that are similar to the BRIRs that would be measured if, in place of the Soundfield microphone, the dummy head used for the HRTF set were directly recording them. The proposed technique makes it possible to obtain, from a set of HRTFs, corresponding BRIRs for different rooms without the dummy head or person having to be present for the measurement.
Convention Paper 7614 (Purchase now)

P19-6 A New Audio Postproduction Tool for Speech Dereverberation
Keisuke Kinoshita, Tomohiro Nakatani, Masato Miyoshi, NTT Communication Science Laboratories - Kyoto, Japan; Toshiyuki Kubota, NTT Media Lab. - Tokyo, Japan
This paper proposes a new audio postproduction tool for speech dereverberation based on our previously proposed method. In previous studies we proposed a single-channel dereverberation method as a preprocessing step for automatic speech recognition and reported good performance. This paper focuses more on improving the audible quality of the dereverberated signals. To achieve good dereverberation with fewer audible artifacts, the previously proposed dereverberation method is combined with post-processing that implicitly takes account of the perceptual masking property. The system has three adjustable parameters for controlling the audible quality. In an informal evaluation, we found that the proposed tool allows professional audio engineers to dereverberate a set of reverberant recordings efficiently.
Convention Paper 7615 (Purchase now)


Saturday, October 4, 2:30 pm — 4:00 pm

P20 - Loudspeakers—Part 3


P20-1 Preliminary Results of Calculation of a Sound Field Distribution for the Design of a Sound Field Effector Using a 2-Way Loudspeaker Array with Pseudorandom Configuration
Yoshihiro Iijima, Musashi Institute of Technology - Tokyo, Japan; Kaoru Ashihara, Advanced Industrial Science and Technology - Tsukuba, Japan; Shogo Kiryu, Musashi Institute of Technology - Tokyo, Japan
We have been developing a loudspeaker array system that can control a sound field in real time for live concerts. In order to reduce the sidelobes and to improve the frequency range, a 2-way loudspeaker array with pseudorandom configuration is proposed. Software is being developed to determine the configuration. For now, the configuration is optimized for a focused sound. The software calculates the ratio between the sound pressure of the focus point and the average of the sound pressure around the focus. It was shown that the sidelobes can be reduced with a pseudorandom configuration.
Convention Paper 7616 (Purchase now)

P20-2 Design and Implementation of a Sound Field Effector Using a Loudspeaker Array
Seigo Hayashi, Tomoaki Tanno, Musashi Institute of Technology - Tokyo, Japan; Toru Kamekawa, Tokyo National University of Fine Arts and Music - Tokyo, Japan; Kaoru Ashihara, Advanced Industrial Science and Technology - Tokyo, Japan; Shogo Kiryu, Musashi Institute of Technology - Tokyo, Japan
We have been developing an effector that uses a 128-channel two-way loudspeaker array system for live concerts. The system was designed so that the sound field can be changed within 10 ms. The variable delay circuits and the communication circuit between the hardware and the control computer are implemented in one FPGA. All of the delay data, calculated in advance, are stored in the SDRAM mounted on the FPGA board, and only simple commands are sent from the control computer. The system can control up to four sound focuses independently.
Convention Paper 7617 (Purchase now)

P20-3 Wave Field Synthesis: Practical Implementation and Application to Sound Beam Digital Pointing
Paolo Peretti, Laura Romoli, Lorenzo Palestini, Stefania Cecchi, Francesco Piazza, Universita Politecnica delle Marche - Ancona, Italy
Wave Field Synthesis (WFS) is a digital signal processing technique introduced to achieve an optimal acoustic sensation over a larger area than traditional systems (stereophony, Dolby Digital). It is based on a large number of loudspeakers, and its real-time implementation requires efficient solutions to limit the computational cost. To this end, in this paper we propose an approach based on preprocessing the component of the driving function that does not depend on the audio stream. Tests with linear and circular geometries will be described, and the application of this technique to digital pointing of the sound beam will be presented.
Convention Paper 7618 (Purchase now)

P20-4 Highly Focused Sound Beamforming Algorithm Using Loudspeaker Array System
Yoomi Hur, Seong Woo Kim, Yonsei University - Seoul, Korea; Young-cheol Park, Yonsei University - Wonju, Korea; Dae Hee Youn, Yonsei University - Seoul, Korea
This paper presents a sound beamforming technique that can generate a highly focused sound beam using a loudspeaker array. For this purpose, we find the optimal weights that maximize the contrast of the sound power ratio between the target region and the other regions. However, the directly derived weights are limited in how low they can keep the level in the non-target regions, so an iterative pattern synthesis technique, originally introduced for antenna arrays, is investigated. By assuming imaginary signal powers in the non-target regions, the system iteratively improves the contrast ratio further. The performance of the proposed method was evaluated, and the results showed that it could generate a more highly focused sound beam than the conventional method.
Convention Paper 7619 (Purchase now)
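
The non-iterative starting point described above, maximizing the power ratio between the target (bright) region and the remaining (dark) region, is commonly solved as a generalized eigenvalue problem. The sketch below assumes known transfer-function matrices and a small regularization term; the iterative pattern-synthesis refinement from the paper is not shown.

```python
import numpy as np
from scipy.linalg import eigh

def contrast_maximizing_weights(G_bright, G_dark, reg=1e-6):
    """Loudspeaker weights maximizing bright/dark power contrast at one frequency.

    G_bright: (Mb x L) transfer functions from L loudspeakers to Mb target-zone
    points; G_dark: (Md x L) to non-target points. Returns the principal
    generalized eigenvector of (Gb^H Gb) w = lambda (Gd^H Gd + reg*I) w.
    """
    A = G_bright.conj().T @ G_bright
    B = G_dark.conj().T @ G_dark + reg * np.eye(G_dark.shape[1])
    vals, vecs = eigh(A, B)          # eigenvalues in ascending order
    w = vecs[:, -1]                  # eigenvector of the largest eigenvalue
    return w / np.linalg.norm(w)
```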

P20-5 Super-Directive Loudspeaker Array for the Generation of a Personal Sound Zone
Jung-Woo Choi, Youngtae Kim, Sangchul Ko, Jung-Ho Kim, Samsung Electronics Co. Ltd. - Gyeonggi-do, Korea
A sound manipulation technique is proposed for selectively enhancing a desired acoustic property in a zone of interest called the personal sound zone. In order to create a personal sound zone in which a listener can experience a high sound level, acoustic energy is focused on only a selected area. Recently, two performance measures indicating acoustic properties of the personal sound zone—acoustic brightness and contrast—were employed to optimize the driving functions of a loudspeaker array. In this paper some limitations of the individual control methods are first presented, and then a novel control strategy is suggested that combines the advantages of both in a single objective function. Precise control of a sound field with a desired shape of energy distribution is made possible by introducing a continuous spatial weighting technique. The results are compared to those based on the least-squares optimization technique.
Convention Paper 7620 (Purchase now)


Saturday, October 4, 3:30 pm — 5:30 pm

Grammy SoundTable


Moderator:
Mike Clink, producer/engineer/entrepreneur (Guns N’ Roses, Sarah Kelly, Mötley Crüe)
Panelists:
Sylvia Massy, producer/engineer/entrepreneur (System of a Down, Johnny Cash, Econoline Crush)
Keith Olsen, producer/engineer/entrepreneur (Fleetwood Mac, Ozzy Osbourne, POGOLOGO Productions/MSR Acoustics)
Phil Ramone, producer/engineer/visionary (Elton John, Ray Charles, Shelby Lynne)
Carmen Rizzo, artist/producer/remixer (Seal, Paul Oakenfold, Coldplay)
John Vanderslice, artist/indie rock innovator/studio owner (MK Ultra, Mountain Goats, Spoon)

Abstract:
The 20th Annual GRAMMY Recording SoundTable is presented by the National Academy of Recording Arts & Sciences Inc. (NARAS) and hosted by AES.

YOU, Inc.! New Strategies for a New Economy

Today’s audio recording professional need only walk down the aisle of a Best Buy, turn on a TV, or listen to a cell phone ring to hear possibilities for new revenue streams and new applications to showcase their talents. From video games to live shows to ringbacks and 360 deals, money and opportunities are out there. It’s up to you to grab them.

For this special event the Producers & Engineers Wing has assembled an all-star cast of audio pros who’ll share their experiences and entrepreneurial expertise in creating opportunities in music and audio. You’ll laugh, you’ll cry, you’ll learn.


Saturday, October 4, 5:00 pm — 6:45 pm

T14 - Electric Guitar-The Science Behind the Ritual


Presenter:
Alex U. Case, University of Massachusetts - Lowell, MA, USA

Abstract:
It is an unwritten law that recording engineers approach the electric guitar amplifier with a Shure SM57, in close against the grille cloth, a bit off-center of the driver, and angled a little. These recording decisions serve us well, but do they really matter? What changes when you back the microphone away from the amp, move it off center of the driver, and change the angle? Alex Case, Sound Recording Technology professor to graduates and undergraduates at UMass Lowell, breaks it down, with measurements and discussion of the variables that lead to punch, crunch, and other desirables in electric guitar tone.


Saturday, October 4, 6:00 pm — 7:00 pm

MIX Foundation 2008 TECnology Hall of Fame


Abstract:
Hosted by Mix Magazine Executive Editor/TECnology Hall of Fame director George Petersen.

Presented annually by the Mix Foundation for Excellence in Audio to honor significant, lasting contributions to the advancement of audio technology, this year's event will recognize fifteen audio innovations. "It is interesting to note how many of these products are still in daily use decades after their introduction," Petersen says. "These aren't simply museum pieces, but working tools. We're proud to recognize their significance to the industry."


Saturday, October 4, 8:00 pm — 9:00 pm

Organ Concert
by
Graham Blyth


Abstract:
The Cathedral of St. Mary of the Assumption

Organist Graham Blyth's concerts are a highlight of every AES convention at which he performs. This year's recital will be held at St. Mary's Cathedral, a modern structure with a panoramic view of San Francisco. The cathedral's Ruffatti organ was designed with the Baroque repertoire in mind, a fact Blyth will reflect with a strong Bach emphasis in the program. In honor of the centennial of his birth, music by French composer Olivier Messiaen also will be played.

Located just minutes from the Golden Gate Bridge, Downtown Financial District, Twin Peaks, and The Marina, St. Mary’s Cathedral is in the heart of the city at the top of Cathedral Hill. The Ruffatti Organ, built in 1971 by Fratelli Ruffatti of Padua, Italy, has been acclaimed as one of the finest in the world. It rises impressively from its soaring pedestal platform into a magnificent art form in its own right. It consists of 4842 pipes on 89 ranks and 69 stops.


Sunday, October 5, 9:00 am — 10:00 am

P16 Papers Demo


Abstract:
Playback session related to Paper Session 16 "Spatial Audio Quality" held Saturday, October 4 from 10:30 am to 1:00 pm.


Sunday, October 5, 9:00 am — 10:30 am

Design Competition


Abstract:
The Design Competition is open to audio projects developed by students at any university or recording school, challenging them with an opportunity to showcase their technical skills. It is not for recording projects or theoretical papers, but rather for design concepts and prototypes. Designs will be judged by a panel of industry experts in design and manufacturing. Multiple prizes will be awarded.


Sunday, October 5, 9:00 am — 11:00 am

M4 - Acoustics and Multiphysics Modeling


Presenter:
John Dunec, Comsol - Palo Alto, CA, USA

Abstract:
This Master Class covers acoustics and multiphysics modeling using Comsol. The Acoustics Module is specifically designed for those who work in classical acoustics with devices that produce, measure, and utilize acoustic waves. Application areas include the design of loudspeakers, microphones, hearing aids, noise control, sound barriers, mufflers, buildings, and performance spaces.


Sunday, October 5, 9:00 am — 11:30 am

P22 - Hearing Enhancement


Chair: Alan Seefeldt, Dolby Laboratories - San Francisco, CA, USA

P22-1 Assessing the Acoustic Performance and Potential Intelligibility of Assistive Audio Systems for the Hard of Hearing and Other Users
Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK
Around 14% of the general population suffer from a noticeable degree of hearing loss and would benefit from some form of hearing assistance or deaf aid. Recent DDA legislation and requirements mean that many more hearing assistive systems are being installed—yet there is evidence to suggest that many of these systems fail to perform adequately and provide the benefit expected. There has also been a proliferation of classroom and lecture room “soundfield” systems, with much conflicting evidence as to their apparent effectiveness. This paper reports on the results of trial acoustic performance testing of such systems. In particular, the system microphone type, distance, and location are shown to have a significant effect on the resultant performance. The potential of using the Speech Transmission Index (STI), and in particular STIPa, for carrying out installation surveys has been investigated, and a number of practical problems are highlighted. The requirements for a suitable acoustic test source to mimic a human talker are discussed, as is the need to adequately assess the effects of both reverberation and noise. The findings discussed in the paper are also relevant to the installation and testing of boardroom and conference room telecommunication systems.
Convention Paper 7626 (Purchase now)

P22-2 Aging and Sound Perception: Desirable Characteristics of Entertainment Audio for the Elderly
Hannes Müsch, Dolby Laboratories - San Francisco, CA, USA
During the last few years the research community has made substantial progress toward understanding how aging affects the way the ear and brain process sound. A review of the literature supports our experience as audio professionals that elderly listeners have preferences for the reproduction of entertainment audio that differ from those of young listeners. This presentation reviews the literature on aging and sound perception with a focus on speech. The review identifies desirable audio reproduction characteristics and discusses signal processing techniques to generate audio that is suited for elderly listeners.
Convention Paper 7627 (Purchase now)

P22-3 Speech Enhancement of Movie Sound
Christian Uhle, Oliver Hellmuth, Jan Weigel, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Today, many people have problems understanding the speech content of a movie, e.g., due to hearing impairments. This paper describes a method for improving speech intelligibility of movie sound. Speech is detected by means of a pattern recognition method; the audio signal is then attenuated during periods where speech is absent. The speech signals are further processed by a spectral weighting method aiming at the suppression of the background noise. The spectral weights are computed by means of feature extraction and a neural network regression method. The output signal finally carries all relevant speech with reduced background noise allowing the listener to follow the plot of the movie more easily. Results of numerical evaluations and of listening tests are presented.
Convention Paper 7628 (Purchase now)

P22-4 An Investigation of Audio Balance for Elderly Listeners Using Loudness as the Main Parameter
Tomoyasu Komori, Toru Takagi, NHK Science and Technical Research Laboratories - Tokyo, Japan; Kohichi Kurozumi, NHK Engineering Service, Inc. - Tokyo, Japan; Kazuhiro Murakawa, Yamaki Electric Corporation - Tokyo, Japan
We have been studying the best sound balance for audibility for elderly listeners. We conducted subjective tests on the balance between narration and background sound using professional sound mixing engineers. The comparative loudness of narration to background sound was used to calculate appropriate respective levels for use in documentary programs. Monosyllabic intelligibility tests were then conducted in a noisy environment with both elderly and young people and a condition that complicates hearing for the elderly was identified. Assuming that the recruitment phenomenon and reduced ability to separate narration from background sound cause hearing problems for the elderly, we estimated appropriate loudness levels for them. We also constructed a prototype to assess the best audio balance for the elderly objectively.
Convention Paper 7629 (Purchase now)

P22-5 Estimating the Transfer Function from Air Conduction Recording to One’s Own Hearing
Sook Young Won, Jonathan Berger, Stanford University - Stanford, CA, USA
It is well known that there is often a sense of disappointment when an individual hears a recording of his or her own voice. The perceptual disparity between the live and recorded sound of one's own voice can be explained scientifically as the result of the multiple paths via which our body transmits vibrations from the vocal cords to the auditory system during vocalization, as opposed to the single air-conducted path involved in hearing a playback of one's own recorded voice. In this paper we investigate the spectral characteristics of one's own hearing as compared to an air-conducted recording. To accomplish this objective, we designed and conducted a perceptual experiment with a real-time filtering application.
Convention Paper 7630 (Purchase now)


Sunday, October 5, 9:00 am — 11:00 am

P23 - Audio DSP


Chair: Jon Boley, LSB Audio

P23-1 Determination and Correction of Individual Channel Time Offsets for Signals Involved in an Audio Mixture
Enrique Perez Gonzalez, Joshua Reiss, Queen Mary University of London - London, UK
A method for reducing comb-filtering effects due to delay time differences between audio signals in a sound mixer has been implemented. The method uses a multichannel cross-adaptive effect topology to automatically determine the minimal delay and polarity contributions required to optimize the sound mixture. The system uses real-time, time-domain transfer function measurements to determine and correct the individual channel offset for every signal involved in the audio mixture. The method has applications in live and recorded audio mixing wherever a single sound source is captured through more than one signal path, for example when recording a drum set with multiple microphones. Results are reported that determine the effectiveness of the proposed method.
Convention Paper 7631 (Purchase now)
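
A single-pair version of the measurement step described above might look like the sketch below: the relative delay between two channels is estimated from the peak of their cross-correlation and the polarity from the sign of that peak. The multichannel cross-adaptive topology and real-time transfer-function measurement of the actual system are not reproduced.

```python
import numpy as np

def estimate_offset_and_polarity(ref, sig):
    """Return (delay_in_samples, polarity) aligning `sig` to `ref`.

    Positive delay means `sig` lags `ref`; polarity is +1 or -1 depending on
    the sign of the cross-correlation peak (an inverted-polarity path shows a
    strong negative peak).
    """
    xc = np.correlate(sig, ref, mode="full")
    idx = np.argmax(np.abs(xc))
    delay = idx - (len(ref) - 1)
    polarity = 1 if xc[idx] >= 0 else -1
    return delay, polarity

def correct(sig, delay, polarity):
    """Advance/flip `sig` so it sums coherently with the reference.

    np.roll wraps samples around the ends; a real mixer would use a proper
    delay line instead, so this is only an offline illustration.
    """
    return polarity * np.roll(sig, -delay)
```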

P23-2 STFT-Domain Estimation of Subband Correlations
Michael M. Goodwin, Creative Advanced Technology Center - Scotts Valley, CA, USA
Various frequency-domain and subband audio processing algorithms for upmix, format conversion, spatial coding, and other applications have been described in the recent literature. Many of these algorithms rely on measures of the subband autocorrelations and cross-correlations of the input audio channels. In this paper we consider several approaches for estimating subband correlations based on a short-time Fourier transform representation of the input signals.
Convention Paper 7632 (Purchase now)
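
One common estimator of the kind compared in the paper is a first-order recursive average of the per-bin auto- and cross-spectra, from which a normalized correlation coefficient is formed in each band. The smoothing scheme below is an assumed example, only one of the approaches the paper considers.

```python
import numpy as np

def subband_correlation(X1, X2, alpha=0.9):
    """Normalized cross-correlation per STFT bin for two input channels.

    X1, X2: complex STFT matrices shaped (frames, bins). A first-order
    recursive average with forgetting factor `alpha` smooths the auto- and
    cross-spectra over time before normalization.
    """
    frames, n_bins = X1.shape
    p11 = np.zeros(n_bins)
    p22 = np.zeros(n_bins)
    p12 = np.zeros(n_bins, dtype=complex)
    rho = np.zeros((frames, n_bins))
    for m in range(frames):
        p11 = alpha * p11 + (1 - alpha) * np.abs(X1[m]) ** 2
        p22 = alpha * p22 + (1 - alpha) * np.abs(X2[m]) ** 2
        p12 = alpha * p12 + (1 - alpha) * X1[m] * np.conj(X2[m])
        rho[m] = np.abs(p12) / np.sqrt(p11 * p22 + 1e-12)
    return rho
```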

P23-3 Separation of Singing Voice from Music Accompaniment with Unvoiced Sounds Reconstruction for Monaural Recordings
Chao-Ling Hsu, Jyh-Shing Roger Jang, National Tsing Hua University - Hsinchu, Taiwan; Te-Lu Tsai, Institute for Information Industry - Taipei, Taiwan
Separating singing voice from music accompaniment is an appealing but challenging problem, especially in the monaural case. One existing approach is based on computational auditory scene analysis, which uses pitch as the cue to resynthesize the singing voice. However, the unvoiced parts of the singing voice are ignored entirely, since they have no pitch at all. This paper proposes a method to detect the unvoiced parts of an input signal and to resynthesize them without using pitch information. The experimental results show that the unvoiced parts can be reconstructed successfully, with a signal-to-noise ratio 3.28 dB higher than that achieved by the current state-of-the-art method in the literature.
Convention Paper 7633 (Purchase now)

P23-4 Low Latency Convolution In One Dimension Via Two Dimensional Convolutions: An Intuitive Approach
Jeffrey Hurchalla, Garritan Corp. - Orcas, WA, USA
This paper presents a class of algorithms that can be used to efficiently perform the running convolution of a digital signal with a finite impulse response. The impulse response is uniformly partitioned and transformed into the frequency domain, changing the one-dimensional convolution into a two-dimensional convolution that can be efficiently solved with nested short-length acyclic convolution algorithms applied in the frequency domain. The latency of the running convolution is the time needed to acquire a block of data equal in size to the uniform partition length.
Convention Paper 7634 (Purchase now)
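
The uniform-partition idea underlying the paper can be illustrated with the standard frequency-domain delay-line form of uniformly partitioned overlap-save convolution; the nested short-length acyclic convolutions that give the two-dimensional view are not reproduced. This is a minimal sketch with illustrative class and variable names, and its latency is exactly one block, matching the statement above.

```python
import numpy as np

class PartitionedConvolver:
    """Uniformly partitioned FFT convolution (overlap-save, one-block latency).

    The impulse response is split into blocks of length `block`; each block is
    transformed once. At run time every incoming block costs one FFT, one
    inverse FFT, and P complex spectrum multiply-adds.
    """
    def __init__(self, ir, block):
        self.B = block
        P = int(np.ceil(len(ir) / block))
        ir = np.pad(ir, (0, P * block - len(ir)))
        # spectra of the zero-padded IR partitions (FFT size 2B)
        self.H = np.array([np.fft.rfft(ir[p * block:(p + 1) * block], 2 * block)
                           for p in range(P)])
        self.fdl = np.zeros_like(self.H)   # frequency-domain delay line
        self.prev = np.zeros(block)        # previous input block

    def process(self, x_block):
        """Convolve one input block of length `block`; returns one output block."""
        buf = np.concatenate([self.prev, x_block])   # overlap-save input buffer
        self.prev = x_block.copy()
        self.fdl = np.roll(self.fdl, 1, axis=0)
        self.fdl[0] = np.fft.rfft(buf)
        y = np.fft.irfft(np.sum(self.fdl * self.H, axis=0), 2 * self.B)
        return y[self.B:]                            # keep the alias-free half
```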


Sunday, October 5, 11:00 am — 1:00 pm

W11 - Upcoming MPEG Standard for Efficient Parametric Coding and Rendering of Audio Objects


Chair:
Oliver Hellmuth, Fraunhofer Institute for Integrated Circuits IIS
Panelists:
Jonas Engdegård
Christof Faller
Jürgen Herre
Leon van de Kerkhof

Abstract:
Through exploiting the human perception of spatial sound, “Spatial Audio Coding” technology enabled new ways of low bit-rate audio coding for multichannel signals. Following the finalization of the MPEG Surround specification, ISO/MPEG launched a follow-up standardization activity for bit-rate-efficient and backward compatible coding of several sound objects. On the receiving side, such a Spatial Audio Object Coding (SAOC) system renders the objects interactively into a sound scene on a reproduction setup of choice. The workshop reviews the ideas, principles, and prominent applications behind Spatial Audio Object Coding and reports on the status of the ongoing ISO/MPEG Audio standardization activities in this field. The benefits of the new approach will be highlighted and illustrated by means of real-time demonstrations.


Sunday, October 5, 11:00 am — 1:00 pm

L11 - Loudspeaker System Optimization


Chair:
Bruce C. Olson, Olson Sound Design - Minneapolis, MN, USA
Panelists:
Ralph Heinz, Renkus-Heinz
TBA

Abstract:
The panel will discuss recommended ways to optimize loudspeaker systems for use in the typical venues frequented by local bands and regional sound companies. These industry experts will give you practical advice on getting your system to sound good in the setup time that is typically available. OK, maybe not typical; we assume you can get more than five minutes for the task. Once the system is optimized properly, all you have to do is make the band sound good. This advice is targeted at band engineers and system techs for small sound companies, and maybe even some of you big guys as well.


Sunday, October 5, 11:30 am — 1:00 pm

Platinum Road Warriors


Moderator:
Clive Young
Panelists:
Eddie Mapp
Paul “Pappy” Middleton
Howard Page

Abstract:
An all-star panel of leading front-of-house engineers will explore subject matter ranging from gear to gossip in what promises to be an insightful, amusing, and enlightening 90-minute session. Engineers for superstar artists will discuss war stories, technical innovations, and heroic efforts to maintain the eternal “show must go on” code of the road. Ample time will be provided for an audience Q&A session.


Sunday, October 5, 11:30 am — 1:00 pm

T16 - Latest Advances in Ceramic Loudspeakers and Their Drivers


Presenters:
Mark Cherry, Maxim, Inc. - Sunnyvale, CA, USA
Robert Polleros, Maxim Integrated Products - Austria
Peter Tiller, Murata - Atlanta, GA, USA

Abstract:
Today’s portable devices need smaller, thinner, more power-efficient electronic components, and cellular phones have become so thin that the dynamic speaker is typically the factor limiting how thin manufacturers can make their handsets. New developments in ceramic, or piezoelectric, loudspeakers have opened the door for sleeker designs: these speakers can deliver competitive sound-pressure levels (SPL) in a thin, compact package and are quickly emerging as a viable alternative to traditional voice-coil dynamic speakers. Because of the capacitive nature of ceramic speakers, however, special considerations need to be taken into account when choosing an audio amplifier to drive them.
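
A back-of-envelope Python sketch below illustrates why the amplifier choice matters: treating the ceramic speaker as a roughly capacitive load, the peak current it draws grows linearly with frequency (I_peak ≈ 2·π·f·C·V_peak). The capacitance and drive voltage used are assumed example values, not figures from the presenters.

```python
# Back-of-envelope illustration: current demand of a capacitive (piezo/ceramic)
# speaker rises with frequency. Component values below are assumed examples.
import math

def peak_current_into_capacitive_load(v_peak, freq_hz, capacitance_f):
    return 2 * math.pi * freq_hz * capacitance_f * v_peak

# e.g., a 1 uF ceramic speaker driven at 15 V peak
for f in (1_000, 5_000, 10_000, 20_000):
    i = peak_current_into_capacitive_load(15.0, f, 1e-6)
    print(f"{f:>6} Hz: {i * 1000:7.1f} mA peak")
```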


Sunday, October 5, 2:30 pm — 5:00 pm

L12 - Innovations in Live Sound—A Historical Perspective


Chair:
Ted Leamy, Pro Media | UltraSound
Panelists:
Graham Blyth, Soundcraft
Ken Lopez, University of Southern California
John Meyer, Meyer Sound

Abstract:
New techniques and products are often driven by changes in need and available technology. Today’s sound professional has a myriad of products to choose from. That wasn’t always the case. What drove the creation of today’s products? What will drive the products of tomorrow? Sometimes a look back is the best way to get a peek ahead. A panel of industry pioneers and trailblazers will take a look back at past live sound innovations with an emphasis on the needs and constraints that drove their development and adoption.


Sunday, October 5, 2:30 pm — 4:00 pm

W14 - Navigating the Technology Mine Field in Game Audio


Chair:
Marc Schaefgen, Midway Games
Panelists:
Rich Carle, Midway Games
Clark Crawford, Midway Games
Kristoffer Larson, Midway Games

Abstract:
In the early days of game audio systems, tools and assets were all developed and produced in-house. The growth of the games industry has resulted in larger audio teams with separate groups dedicated to technology or content creation. The breadth of game genres and number of “specialisms” required to create the technology and content for a game have mandated that developers look out of house for some of their game audio needs.

In this workshop the panel discusses the changing needs of the game-audio industry and the models a studio typically uses to produce the audio for a game. The panel consists of audio directors from different first-party studios owned by Midway Games. They will examine the current middleware market and explain how various tools are used by their studios in the audio production chain. The panel will also explain how work is outsourced to out-of-house musicians and sound designers as part of the production process.


Sunday, October 5, 2:30 pm — 4:30 pm

T17 - An Introduction to Digital Pulse Width Modulation for Audio Amplification


Presenter:
Pallab Midya, Freescale Semiconductor Inc. - Austin, TX, USA

Abstract:
Digital PWM is highly suitable for audio amplification. Digital audio sources can be readily converted to digital PWM using digital signal processing. The mathematical nonlinearity associated with PWM can be corrected with extremely high accuracy. Natural sampling and other techniques that convert a PCM signal to a digital PWM signal will be discussed. Due to limitations of digital clock speeds and jitter, the duty ratio of the PWM signal has to be quantized to a small number of bits. The noise due to quantization can be effectively shaped to fall outside the audio band. PWM-specific noise shaping techniques will be explained in detail. Further, sample rate conversion is needed for a digital PWM modulator to work with a digital PCM signal that is generated using a different clock. The mathematics of asynchronous sample rate converters will also be discussed. Digital PWM signals are amplified by a power stage that introduces nonlinearity and mixes in noise from the power supply. This mechanism will be examined and ways to correct for it will be discussed.
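
As a hedged illustration of one step in that chain (uniform rather than natural sampling, and not the presenter's specific algorithms), the Python sketch below maps PCM samples to quantized PWM duty ratios with simple first-order error-feedback noise shaping; the bit depth and test signal are assumed example values.

```python
# Illustrative sketch only: PCM samples in [-1, 1) mapped to quantized PWM duty
# ratios, with first-order error-feedback noise shaping of the quantization error.
import numpy as np

def pcm_to_pwm_duties(pcm, duty_bits=8):
    levels = 2 ** duty_bits
    duties = np.empty(len(pcm), dtype=int)
    err = 0.0
    for n, s in enumerate(pcm):
        target = (s + 1.0) / 2.0            # map [-1, 1) to duty ratio [0, 1)
        shaped = target + err               # add back previous quantization error
        q = int(round(shaped * (levels - 1)))
        q = min(max(q, 0), levels - 1)
        err = shaped - q / (levels - 1)     # error fed back to the next sample
        duties[n] = q                       # counter value for one PWM period
    return duties

# Example: one cycle of a 1 kHz tone sampled at 48 kHz
t = np.arange(48) / 48000.0
duties = pcm_to_pwm_duties(0.5 * np.sin(2 * np.pi * 1000 * t))
```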