AES Munich 2009
Recording Industry Event Details

Thursday, May 7, 09:00 — 11:30

P1 - Audio for Telecommunications


Chair: Damian Murphy

P1-1 20 Things You Should Know Before Migrating Your Audio Network to IP
Simon Daniels, APT - Belfast, Northern Ireland, UK
For many years, synchronous networks have been considered the industry standard for audio transport worldwide. Balanced analog copper circuits, microwave, and synchronous systems such as V.35/X.21 or T1/E1 have been the traditional choice for studio-to-transmitter links (STLs) and inter-studio links in professional audio broadcast networks. Readily available from all major service providers, synchronous links owe their popularity largely to the fact that they offer dedicated, reliable, point-to-point, bi-directional communication at guaranteed data and error rates. However, their reign as the preferred choice for STLs is now coming under threat from a new challenger in the form of IP-based network technology.
Convention Paper 7651 (Purchase now)

P1-2 Deploying Large Scale Audio IP Networks
Kevin Campbell, APT - Belfast, Northern Ireland, UK
This paper will examine the key considerations for those interested in deploying large-scale IP audio networks. It will include an overview of the main challenges and will draw on the experience of national public broadcasters who have already migrated to IP. We will provide an overview of key concerns such as jitter, delay, and link reliability that apply to an IP network of any size; however, the paper will focus mainly on the issues arising from the greater complexity and scale of large national and country-wide deployments. Network applications from real-world deployments will be used to illustrate these points. Paper presented by Hartmut Foerster.
Convention Paper 7652 (Purchase now)

P1-3 A Spatial Filtering Approach for Directional Audio Coding
Markus Kallinger, Henning Ochsenfeld, Giovanni Del Galdo, Fabian Kuech, Dirk Mahne, Richard Schultz-Amling, Oliver Thiergart, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
In hands-free telephony, spatial filtering techniques are employed to enhance the intelligibility of speech. More precisely, these techniques aim at reducing the reverberation of the desired speech signal and attenuating interferences. Additionally, it is well known that the spatially separate reproduction of desired and interfering sources enhances speech intelligibility. For the latter task, Directional Audio Coding (DirAC) has proven to be an efficient method to capture and reproduce spatial sound. In this paper we propose a spatial filtering processing block that works in the parameter domain of DirAC. Simulation results show that, compared to a standard beamformer, the novel technique offers significantly higher interference attenuation while introducing comparably low distortion of the desired signal. Additional subjective tests of speech intelligibility confirm the instrumentally obtained results.
Convention Paper 7653 (Purchase now)

P1-4 A New Bandwidth Extension for Audio Signals without Using Side-Information
Kha Le Dinh, Chon Tam Le Dinh, Roch Lefebvre, Université de Sherbrooke - Sherbrooke, Quebec, Canada
The use of a narrow bandwidth (300–3400 Hz) in the current telephone network limits the perceptual quality of telephone conversations. Changing to a wideband network would improve quality, but such an upgrade will take a long time. Bandwidth extension can therefore be seen as an alternative solution during the transition period. A new bandwidth extension method is presented in this paper. Because it uses no side-information, the proposed method can be applied as a post-processing step at the terminal devices, maintaining compatibility with the current telephone network, so that no modification is needed in the network nodes. Experimental results show that the proposed solution significantly improves the perceptual quality of narrowband telephone signals.
Convention Paper 7654 (Purchase now)

P1-5 Feature Selection vs. Feature Space Transformation in Music Genre Classification Framework
Hanna Lukashevich, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany
Automatic classification of music genres is an important task in music information retrieval research. Nearly all state-of-the-art music genre recognition systems start with a feature extraction block. The extracted acoustical features often tend to be correlated and/or redundant, which can cause various difficulties in the classification stage. In this paper we present a comparative analysis of applying supervised Feature Selection (FS) and Feature Space Transformation (FST) algorithms to reduce the feature dimensionality. We discuss the pros and cons of the methods and weigh the benefits of each against the others.
Convention Paper 7655 (Purchase now)


Thursday, May 7, 09:00 — 11:00

W1 - Lies, Damn Lies, and Statistics


Chair:
Sylvain Choisel, Philips Consumer Lifestyle - Leuven, Belgium
Panelists:
Thomas Sporer, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany
Florian Wickelmaier, University of Tübingen - Tübingen, Germany

Abstract:
Listening tests have become an important part of the development of audio systems (CODECS, loudspeakers, etc.). Unfortunately, even the simplest statistics (mean and standard deviation) are often misused.

This workshop will start with a basic introduction to statistics, but room will be given to discuss the pertinence of some commonly-used tests, and alternative methods will be proposed, thereby making it interesting for more experienced statisticians as well. The following topics will be covered (among others): experimental design, distributions, hypothesis testing, confidence intervals, analysis of paired comparisons and ranking data, and common pitfalls potentially leading to wrong conclusions.
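As a hypothetical illustration of one topic listed above, a paired test with a confidence interval on listening-test difference grades might look like the following Python sketch. The listener grades and names below are invented, not taken from the workshop.

```python
import numpy as np
from scipy import stats

# Invented grades: each of eight listeners graded codec A and codec B.
codec_a = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7])
codec_b = np.array([3.6, 3.9, 4.1, 3.5, 3.8, 3.9, 4.0, 3.4])

diff = codec_a - codec_b
mean = diff.mean()
sem = stats.sem(diff)                       # standard error of the mean difference
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1, loc=mean, scale=sem)
t_stat, p_value = stats.ttest_rel(codec_a, codec_b)   # paired t-test

print(f"mean difference = {mean:.2f}, 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

Whether a paired t-test is even appropriate for ordinal listening-test grades is exactly the kind of pitfall the workshop sets out to discuss.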


Thursday, May 7, 09:00 — 11:00

T1 - A New Downmix Algorithm Optimizing Comb Filter Performance


Presenter:
Jörg Deigmöller, IRT Munich

Abstract:
Downmixing 5.1 surround sound to 2.0 stereo is a necessity in current challenging production environments. Straightforward downmix coefficients, as simple as they are to execute, result in comb filtering effects dependent on the correlation of the signals that are downmixed. A new algorithm is presented that tackles this issue with optimized decorrelation pre-processing, leading to much improved timbre of the downmixed signal.
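A minimal sketch of the underlying problem, assuming conventional ITU-R BS.775-style static coefficients (the function and signal names are illustrative, not the algorithm presented in the tutorial): summing correlated channels with fixed gains produces comb filtering, as when a 1 ms inter-channel delay creates notches at odd multiples of 500 Hz.

```python
import numpy as np

def static_downmix(L, R, C, Ls, Rs, c=0.707, s=0.707):
    """Fold 5.0 channels to stereo with fixed (ITU-style) coefficients."""
    left = L + c * C + s * Ls
    right = R + c * C + s * Rs
    return left, right

fs = 48000
x = np.random.randn(fs)                         # broadband test signal
delay = int(0.001 * fs)                         # 1 ms delay between L and Ls
L = x
Ls = np.concatenate([np.zeros(delay), x[:-delay]])
zeros = np.zeros(fs)

# The same signal appearing in L and Ls sums to a comb-filtered left channel.
left, right = static_downmix(L, zeros, zeros, Ls, zeros)
```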


Thursday, May 7, 09:00 — 11:30

P2 - Audio for Games and Interactive Media


Chair: Michael Kelly

P2-1 Viable Distribution of Multichannel Audio-over-IP for Live and Interactive “Voice Talent”-Based Gaming Using High-Quality, Low-Latency Audio Codec Technology
Gregory Massey, APT - Belfast, Northern Ireland, UK
The delivery of multichannel audio—from mono to surround sound—in real-time over public IP networks for the purpose of interactive crowd-participant gaming presents a significant design engineering challenge to games developers, console manufacturers, ISPs, and CDNs. Leveraging expertise gained in professional broadcasting and recording studio postproduction, APT has developed a robust and scalable audio codec technology that meshes with popular gaming systems to realize low-latency distribution of high-quality audio for immersive, instantaneous audio experiences in massively multi-player online games involving interactive audience responses to vocal/singing talent. Paper will be presented by David Trainor.
Convention Paper 7656 (Purchase now)

P2-2 Elevator: Emotional Tracking Using Audio/Visual Interaction
Basileios Psarras, Andreas Floros, Marianna Strapatsakis, Ionian University - Corfu, Greece
Research interest in modeling everyday human emotions and controlling them through typical multimedia content (i.e., audio and video data) has recently increased. In this paper an interactive methodology is introduced for detecting, controlling, and tracking emotions. Based on this methodology, an interactive audiovisual installation termed “Elevator” was realized, aiming to analyze and manipulate simple emotions of the participants (such as anger) using simplified emotion-detection audio signal processing techniques and specifically selected combined audio/visual content. As a result, the human emotions are “elevated” to pre-defined levels and appropriately mapped to visual content, which corresponds to the emotional “thumbnail” of the participants.
Convention Paper 7657 (Purchase now)

P2-3 Applications of Bending Wave Technology in Human Interface Devices
Neil Harris, New Transducers Ltd. (NXT) - Cambridge, UK
The application of bending waves to so-called “flat panel loudspeakers” has often been the topic of papers at AES Conventions. This paper looks at other interesting applications of the technology that have, or are beginning to have, commercial pull. These applications are also part of the interface between human and machine, but focus on the sense of touch rather than hearing. The idea of a touch screen is not new, but it is only now becoming ubiquitous with a new generation of devices typified by the iPhone. If touch sensors are the analog of the microphone, then haptic feedback generators are the analog of the loudspeaker. Bending waves are beginning to find application here too.
Convention Paper 7658 (Purchase now)

P2-4 Designing Auditory Display Menu Interfaces—Cues for Users' Current Location in Extensive Menus
Erik Sikström, Jan Berg, Luleå University of Technology - Luleå, Sweden
This paper reviews current research in auditory display in search of design guidelines for presenting the contents of audio-only menu interfaces. The aim of the review is to find new directions for auditory display menu interface design. Among several techniques for representing individual menu items, the preliminary results show that the spearcon seems to be the most suitable method. For the layout of menu items, studies have shown that spatial separation, different timbres, and staggered onsets between items improve recognition rates, particularly for concurrently presented items. A remaining issue to be investigated is how to remind the user of her current location in the menus of extensive menu interfaces.
Convention Paper 7659 (Purchase now)

P2-5 Symmetry Model-Based Key Finding
Markus Mehnert, Technische Universität Ilmenau - Ilmenau, Germany; Gabriel Gatzsche, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Daniel Arndt, Fraunhofer IIS - Ilmenau, Germany
In this paper we introduce a new key finding algorithm that is based on the symmetry model introduced by Gatzsche et al. The algorithm consists of two parts. First, the most probable diatonic pitch class set of the musical piece is recognized. Second, using one of the subspaces of the symmetry model the mode of the piece is estimated. The algorithm is evaluated with 100 Beatles songs, 90 newer “Pop and Rock” songs, and 252 classical pieces from the Naxos database. The results will be compared to the algorithms of Lerch, Zhu et al., and an algorithm based on binary major and minor chord profiles. The new algorithm has the highest overall key finding MIREX’05 score of 82.9 percent.
Convention Paper 7660 (Purchase now)


Thursday, May 7, 09:00 — 11:00

T2 - An Introduction to Digital Audio Effects


Chair:
Christoph M. Musialik, Algorithmix GmbH
Panelists:
Joshua D. Reiss, Queen Mary, University of London - London, UK
Udo Zölzer, Helmut-Schmidt-Universität - Hamburg, Germany

Abstract:
In this tutorial we discuss the ways by which signal processing techniques are used to produce effects acting on digital audio signals. The audio effects are systematically classified and discussed, with emphasis on how and why they are used. Practical examples of common effects are provided, along with block diagrams, pseudo-code, and sound examples. During the tutorial, a few effects will be created from scratch and the audience will be provided with the basic background knowledge to design their own effects.
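In the spirit of the pseudo-code promised above, here is a minimal sketch of one classic effect of the kind such a tutorial covers, a feedback delay; the parameter names and default values are illustrative only.

```python
import numpy as np

def feedback_delay(x, fs, delay_ms=250.0, feedback=0.5, mix=0.4):
    """Feedback delay: wet = delay-line output; y = (1 - mix) * dry + mix * wet."""
    D = int(delay_ms * 1e-3 * fs)
    y = np.zeros(len(x))
    buf = np.zeros(D)                  # circular delay line
    idx = 0
    for n, xn in enumerate(x):
        delayed = buf[idx]             # sample written D steps ago
        buf[idx] = xn + feedback * delayed
        y[n] = (1.0 - mix) * xn + mix * delayed
        idx = (idx + 1) % D
    return y
```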


Thursday, May 7, 10:00 — 11:00

Archiving, Restoration, and Digital Libraries



Thursday, May 7, 10:00 — 11:30

P3 - Recording, Reproduction, and Delivery


P3-1 Audio Content Annotation, Description, and Management Using Joint Audio Detection, Segmentation, and Classification Techniques
Christos Vegiris, Charalambos Dimoulas, George Papanikolaou, Aristotle University of Thessaloniki - Thessaloniki, Greece
The current paper focuses on audio content management by means of joint audio segmentation and classification. We concentrate on the separation of typical audio classes, such as silence/background noise, speech, crowded speech, music, and their combinations. A compact feature-vector subset is selected by a correlation-based feature selection subset evaluation algorithm after an EM clustering algorithm has been applied to an initial audio data set. Time and spectral parameters are extracted using filter banks and wavelets in combination with sliding windows and exponential moving averaging techniques. Features are extracted on a point-to-point basis, using the finest possible time resolution, so that each sample can be individually classified into one of the available groups. Clustering algorithms such as EM or simple k-means are tested to evaluate the final point-to-point classification result, and therefore the joint audio detection-classification indexes. The extracted audio detection, segmentation, and classification results can be incorporated into appropriate description schemes that annotate audio events/segments for content description and management purposes.
Convention Paper 7661 (Purchase now)

P3-2 Ambience Sound Recording Utilizing Dual MS (Mid-Side) Microphone Systems Based upon Frequency Dependent Spatial Cross Correlation (FSCC) [Part 3: Consideration of Microphones’ Locations]
Teruo Muraoka, Takahiro Miura, Tohru Ifukube, University of Tokyo - Tokyo, Japan
In order to achieve ambient and exactly sound-localized musical recording with a smaller number of microphones, we studied the sound acquisition performance of microphone arrangements utilizing their Frequency Dependent Spatial Cross Correlation (FSCC). The result is that an MS microphone is best suited for this purpose: setting the microphone's directional azimuth to 132 degrees is best for ambient sound acquisition, and setting it to 120 degrees is best for on-stage sound acquisition. We conducted actual concert recordings with a combination of these MS microphones (dual MS microphone systems) and obtained satisfactory results. Subsequently, we studied the proper setting positions of these microphones. For ambient sound acquisition, suspending the microphone at the center of the concert hall is favorable, and for on-stage sound acquisition, locating it almost directly above the conductor's position is also satisfactory. The process of these studies will be reported.
Convention Paper 7662 (Purchase now)

P3-3 A Comparative Approach to Sound Localization within a 3-D Sound Field
Martin J. Morrell, Joshua D. Reiss, Queen Mary, University of London - London, UK
In this paper we compare different methods for sound localization around and within a 3-D sound field. The first objective is to determine which form of panning is consistently preferred for panning sources around the loudspeaker array. The second objective and main focus of the paper is localizing sources within the loudspeaker array. We seek to determine if the sound sources can be located without movement or a secondary reference source. The authors compare various techniques based on ambisonics, vector base amplitude panning and time delay based panning. We report on subjective listening tests that show which method of panning is preferred by listeners and rate the success of panning within a 3-D loudspeaker array.
Convention Paper 7663 (Purchase now)

P3-4 The Effect of Listening Room on Audio Quality in Ambisonics Reproduction
Olli Santala, Helsinki University of Technology - Espoo, Finland; Heikki Vertanen, Helsinki University of Technology - Espoo, Finland, University of Helsinki, Helsinki, Finland; Jussi Pekonen, Jan Oksanen, Ville Pulkki, Helsinki University of Technology - Espoo, Finland
In multichannel reproduction of spatial audio with first-order Ambisonics, the loudspeaker signals are relatively coherent, which produces prominent coloration. The coloration artifacts have been suggested to depend on the acoustics of the listening room. This dependency was researched with subjective listening tests in an anechoic chamber with an octagonal loudspeaker setup. Different virtual listening rooms were created by adding diffuse reverberation with an RT60 of 0.25 seconds using a 3-D 16-channel loudspeaker setup. In the test, the subjects compared the audio quality in the virtual rooms. The results suggest that optimal audio quality was obtained when the virtual room effect and the direct sound were at equal level at the listening position.
Convention Paper 7664 (Purchase now)

P3-5 Ontology-Based Information Management in Music Production
Gyorgy Fazekas, Mark Sandler, Queen Mary, University of London - London, UK
In information management, ontologies are used for defining the concepts and relationships of a domain in question. The use of a schema permits structuring, interoperability, and automatic interpretation of data, and thus allows information to be accessed by means of complex queries. In this paper we use ontologies to associate metadata captured during music production with explicit semantics. The collected data is used for finding audio clips processed in a particular way, for instance by certain engineering procedures or acoustic signal features. As opposed to existing metadata standards, our system builds on the Resource Description Framework, the data model of the Semantic Web, which provides flexible and open-ended knowledge representation. Using this model, we demonstrate a framework for managing information relevant to music production.
Convention Paper 7665 (Purchase now)


Thursday, May 7, 11:00 — 12:00

Coding of Audio Signals



Thursday, May 7, 12:00 — 13:30

Opening Ceremonies
Awards
Keynote Speech


Abstract:
Awards Presentation
Please join us as the AES presents special awards to those who have made outstanding contributions to the Society in such areas as research, scholarship, and publications, as well as other accomplishments that have contributed to the enhancement of our industry. The awardees are:
Bronze Medal Award:
• Ivan Stamac
Fellowship Award:
• Martin Wöhr
Board of Governors Award:
• Jan Berg
• Klaus Blasquiz
• Kimio Hamasaki
• Shinji Koyano
• Tapio Lokki
• Jiri Ocenasek
• John Oh
• Jan Abildgaard Pedersen
• Joshua Reiss

Keynote Speaker

This year’s Keynote Speaker is Gerhard Thoma. Thoma has led the acoustics projects department at BMW for more than 20 years. His speech will highlight many aspects of perception and acoustics from an unusual point of view: What does a driver in a car need to hear, what should he not hear, and how can the acoustics and sounds of a car help to significantly enhance driving pleasure and safety?


Thursday, May 7, 13:30 — 15:30

W2 - New Technologies for Audio Over IP


Chair:
Jeremy Cooperstock, McGill University - Montreal, Quebec, Canada
Panelists:
Steve Church, Telos
Christian Diehl, Mayah
Manfred Lutzky, Fraunhofer IIS
Greg Massey, APT

Abstract:
This workshop is intended to provide an "under the hood" discussion of various low-latency codecs as well as a comparison of their pros and cons for different applications. Codecs including AAC-ELD and ULD will be discussed, along with techniques such as adaptive jitter buffer management.


Thursday, May 7, 14:00 — 15:00

Fiber Optics for Audio (Formative Meeting)


Abstract:
Formative Meeting


Thursday, May 7, 14:00 — 16:00

W3 - Intelligent Digital Audio Effects


Chair:
Christian Uhle, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Panelists:
Alexander Lerch, Z-Plane - Berlin, Germany
Josh Reiss, Queen Mary, University of London - London, UK
Udo Zölzer, Helmut-Schmidt-Universität - Hamburg, Germany

Abstract:
Intelligent Digital Audio Effects (I-DAFx) process audio signals in a signal-adaptive way by using some kind of high-level analysis of the input. Beat tracking, for example, enables the automated adaptation of time delays or of the LFO rate in tremolos, auto-wahs, and vibrato effects. A harmonizer can adapt the additional intervals to the melody that is played. Automatic mixing is approached by analyzing the signal content in all channels to control the panning. These are examples of techniques that are in the scope of this workshop. It presents an overview of I-DAFx and of the methods of semantic audio analysis used in these devices. Practical examples are described and sound examples are demonstrated.


Thursday, May 7, 14:00 — 18:30

P4 - Recording, Reproduction, and Delivery


Chair: Joerg Wuttke

Siegfried Linkwitz, Linkwitz Lab

P4-1 An Expert in Absentia: A Case-Study for Using Technology to Support Recording Studio Practice
Andrew King, University of Hull - Scarborough, North Yorkshire, UK
This paper examines the use of a Learning Technology Interface (LTI) to support the completion of a recording workbook with audio examples over a ten-week period. The LTI provided contingent support to studio users for technical problems encountered in the completion of four recording tasks. Previous research has investigated how students collaborate and problem-solve during a short session in the recording studio using technology as a contingent support tool. In addition, online message boards have been used to record problems encountered when completing a prescribed task (critical-incident recording). A mixed-methods case study approach was used in this study. The students' interactions within the LTI were logged (i.e., frequency, time, duration, type of support) and their feedback was elicited via a user questionnaire at the end of the project. Data from this study demonstrate that learning technology can be a successful support tool and also highlight the frequency and themes of the types of recording practice information accessed by the learners.
Convention Paper 7669 (Purchase now)

P4-2 Recording and Reproduction over Two Loudspeakers as Heard Live—Part 1: Hearing, Loudspeakers, and Rooms
Siegfried Linkwitz, Linkwitz Lab - Corte Madera, CA, USA; Don Barringer, Linkwitz Lab - Arlington, VA, USA
Innate hearing processes define the realism that can be obtained from reproduced sound. An unspecified system with two loudspeakers in a room places considerable limitations upon the degree of auditory realism that can be obtained. It has been observed that loudspeakers and room must be hidden from the auditory scene that is evoked in the listener’s brain. Requirements upon the polar response and the output volume capability of the loudspeaker will be discussed. Problems and solutions in designing a three-way, open baffle loudspeaker with piston drivers will be presented. Loudspeakers and listener must be symmetrically placed in the room to minimize the effects of reflections upon the auditory illusion.
Convention Paper 7670 (Purchase now)

P4-3 Recording and Reproduction over Two Loudspeakers as Heard Live—Part 2: Recording Concepts and Practices
Don Barringer, Linkwitz Lab - Arlington, VA, USA; Siegfried Linkwitz, Linkwitz Lab - Corte Madera, CA, USA
For a half century, the crucial interaction between recording engineer and monitor loudspeakers during two-channel stereophonic recording has not been resolved, leaving the engineer to cope with uncertainties. However, recent advances in defining and improving this loudspeaker-room-listener interface have finally allowed objectivity to inform and shape the engineer’s choices. The full potential of the two-channel format is now accessible to the recording engineer, and in a room that is just as normal as most consumers’ rooms. The improved reproduction has also allowed a deeper understanding of the merits and limits of spaced and coincident/near-coincident microphone arrays. As a result of these and earlier observations, a four-microphone array was conceived that exploits natural hearing processes to achieve greater auditory realism from two loudspeakers. A number of insights have emerged from the experiments.
Convention Paper 7671 (Purchase now)

P4-4 Vision and Technique behind the New Studios and Listening Rooms of the Fraunhofer IIS Audio Laboratory
Andreas Silzle, Stefan Geyersberger, Gerd Brohasga, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Dieter Weninger, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany, Innovationszentrum für Telekommunikationstechnik GmbH IZT, Erlangen, Germany; Michael Leistner, Fraunhofer Institute for Building Physics IBP - Stuttgart, Germany
The new audio laboratory rooms of the Fraunhofer IIS and their technical design are presented here. The vision behind them is driven by the very high demands of a leading-edge audio research organization with more than 100 scientists and engineers. The 300 m² sound studio complex was designed with the intention of providing capabilities that are, in combination, far more extensive than those available in common audio research or production facilities. The reproduction room for listening tests follows the strict recommendations of ITU-R BS.1116. The results of the qualification measurements regarding direct sound, reflected sound, and the steady-state sound field will be shown, and the construction efforts needed to achieve these values are explained. The connection from all the computers in the server room to more than 70 loudspeakers in the reproduction rooms, other audio interfaces, and the projection screens is handled by an audio and video routing system. The architecture of the advanced control software of this routing system is presented. It allows easy and flexible access for each class of user to all the possibilities made available by this completely new system.
Convention Paper 7672 (Purchase now)

P4-5 Advances in National Broadcaster Networks: Exploring Transparent High Definition IPTV
Matthew O’Donnell, British Sky Broadcasting - Upminster, UK
British commercial broadcasters are increasing their ability to determine the quality of distribution of audio-over-IP by acquiring and installing next generation national Gigabit networks. This paper explores how broadcasters can use the advances in broadband technology to transparently integrate supplemental on-demand IPTV services with traditional broadcasting transport, which has led to broadcasters being confident in achieving scalable carrier-class quality of service for delivery of high definition media direct to the customer’s set top box.
Convention Paper 7673 (Purchase now)

P4-6 Multi-Perspective Surround Sound Audio Recording
Mark J. Sarisky, The University of Texas at Austin - Austin, TX, USA
With the advent of Blu-ray Disc Audio (BD-Audio), high-resolution uncompressed audio recordings can be presented as a consumer product in a variety of surround sound formats. This paper proposes a new take on the recording of live and studio music in surround sound that allows the consumer to benefit from the large capacity of the BD-Audio disc and enjoy the recording from multiple listening perspectives.
Convention Paper 7674 (Purchase now)

P4-7 Sound Intensity-Based Three-Dimensional Panning
Akio Ando, Kimio Hamasaki, NHK Science and Technical Research Laboratories - Setagaya, Tokyo, Japan
Three-dimensional (3-D) panning equipment is essential for the production of 3-D audio content. We have already proposed an algorithm to enable such panning. It generates the input signal to be fed into multichannel loudspeakers so as to realize the same physical properties of sound at the receiving point as those created by a single loudspeaker model of the virtual source. A sound pressure vector is used as the physical property. This paper proposes a new method that uses sound intensity instead of the sound pressure vector and shows that both conventional “vector base amplitude panning” and our previous method come very close to achieving coincidence of sound intensity. A new panning method using four loudspeakers is also proposed.
Convention Paper 7675 (Purchase now)
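For reference, a sketch of conventional two-loudspeaker vector base amplitude panning (VBAP), the baseline method mentioned in the abstract above, not the proposed intensity-based approach; the angles and the normalization choice are illustrative assumptions.

```python
import numpy as np

def vbap_pair_gains(source_az, spk_az1, spk_az2):
    """2-D VBAP: gains g so that g1*l1 + g2*l2 points toward the source direction."""
    def unit(az_deg):
        a = np.radians(az_deg)
        return np.array([np.cos(a), np.sin(a)])
    L = np.vstack([unit(spk_az1), unit(spk_az2)])  # rows: loudspeaker unit vectors
    p = unit(source_az)                            # desired source direction
    g = p @ np.linalg.inv(L)                       # solve p = g @ L
    return g / np.linalg.norm(g)                   # constant-power normalization

print(vbap_pair_gains(15.0, -30.0, 30.0))          # source between a +/-30 degree pair
```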

P4-8 A Practical Comparison of Three Tetrahedral Ambisonic Microphones
Dan Hemingson, Mark Sarisky, The University of Texas at Austin - Austin, TX, USA
This paper compares two low-cost tetrahedral ambisonic microphones, an experimental microphone and a Core Sound TetraMic, with a Soundfield MKV or SPS422B serving as a standard for comparison. Recordings were made in natural environments of live performances, in a recording studio, and in an anechoic chamber. The results of analytical and direct listening tests of these recordings are discussed in this paper. A description of the experimental microphone and the recording setup is included.
Convention Paper 7676 (Purchase now)

P4-9 A New Reference Listening Room for Consumer, Professional, and Automotive Audio Research
Sean Olive, Harman International - Northridge, CA, USA
This paper describes the features, scientific rationale, and acoustical performance of a new reference listening room designed for the purpose of conducting controlled listening tests and psychoacoustic research for consumer, professional, and automotive audio products. The main features of the room include quiet and adjustable room acoustics, a high-quality calibrated playback system, an in-wall loudspeaker mover, and complete automated control of listening tests performed in the room.
Convention Paper 7677 (Purchase now)


Thursday, May 7, 14:00 — 18:00

P5 - Loudspeakers


Chair: John Vanderkooy, University of Waterloo - Waterloo, Ontario, Canada

P5-1 Estimating the Velocity Profile and Acoustical Quantities of a Harmonically Vibrating Loudspeaker Membrane from On-Axis Pressure Data
Ronald M. Aarts, Philips Research Europe - Eindhoven, The Netherlands, Technical University of Eindhoven, Eindhoven, The Netherlands; Augustus J. Janssen, Philips Research Europe - Eindhoven, The Netherlands
Formulas are presented for acoustical quantities of a harmonically excited, resilient, flat, circular loudspeaker in an infinite baffle. These quantities are the on-axis and far-field sound pressure, the directivity, and the total radiated power. They are obtained by expanding the velocity distribution in terms of orthogonal polynomials. For rigid and non-rigid radiators, this yields explicit series expressions for both the on-axis and far-field pressure. In the reverse direction, a method of estimating velocity distributions from (measured) on-axis pressures by matching in terms of expansion coefficients is described. Together with the forward far-field computation scheme, this yields a method for assessment of loudspeakers in the far field and of the total radiated power from (relatively near-field) on-axis data (generalized Keele scheme).
Convention Paper 7678 (Purchase now)

P5-2 Testing and Simulation of a Thermoacoustic Transducer Prototype
Fotios Kontomichos, Alexandros Koutsioubas, John Mourjopoulos, Nikolaos Spiliopoulos, Alexandros Vradis, Stamatis Vassilantonopoulos, University of Patras - Patras, Greece
Thermoacoustic transduction is the transformation of thermal energy fluctuations into sound. Devices fabricated from appropriate materials utilize such a mechanism to achieve acoustic wave generation by direct application of an electrical audio signal, without the use of any moving components. A thermoacoustic transducer causes local vibration of air molecules resulting in a proportional pressure change. The present paper studies an implementation of this alternative audio transduction technique for a prototype developed on a silicon wafer. Measurements of the performance of this hybrid solid-state device are presented and compared to the theoretical principles of its operation, which are evaluated via simulations.
Convention Paper 7679 (Purchase now)

P5-3 Analysis of Viscoelasticity and Residual Strains in an Electrodynamic Loudspeaker
Ivan Djurek, Antonio Petosic, University of Zagreb - Zagreb, Croatia; Danijel Djurek, Alessandro Volta Applied Ceramics (AVAC) - Zagreb, Croatia
An electrodynamic loudspeaker was analyzed in three steps: (a) as a device supplied by the market, (b) with the upper suspension removed, and (c) as a dismantled assembly consisting only of the vibrating spider and voice coil. In each step, the resonant frequency and stiffness were measured dynamically for driving currents up to 100 mA, and the stiffness was also measured quasi-statically by the use of calibrated masses. It was found that the widely quoted effect of decreasing resonant frequency with increasing driving current comes from residual strain in the vibrating material, with a significant contribution associated with the spider. When the driving current increases, the residual strain is gradually compensated, giving rise to a minimum of stiffness; the further increase of resonant frequency is attributed to a common nonlinearity in the forced vibrating system.
Convention Paper 7680 (Purchase now)

P5-4 Forces in Cylindrical Metalized Film Audio Capacitors
Philip J. Duncan, University of Salford, Greater Manchester, UK; Nigel Williams, Paul S. Dodds, ICW Ltd. - Wrexham, Wales, UK
This paper is concerned with the analysis of forces acting in metalized polypropylene film capacitors in use in loudspeaker crossover circuits. Capacitors have been subjected to rapid discharge measurements to investigate mechanical resonance of the capacitor body and the electrical forces that drive the resonance. The force due to adjacent flat current sheets has been calculated in order that the magnitude of the electro-dynamic force due to the discharge current can be calculated and compared with the electrostatic force due to the potential difference between the capacitor plates. The electrostatic force is found to be dominant by several orders of magnitude, contrary to assumptions in previous work where the electro-dynamic force is assumed to be dominant. The capacitor is then modeled as a series of concentric cylindrical conductors and the distribution of forces within the body of the capacitor is considered. The primary outcome of this is that the electrostatic forces act predominantly within the inner and outer turn of the capacitor body, while all of the forces acting within the body of the capacitor are balanced almost to zero. Experimental results where resonant acoustic emissions have been measured and analyzed are presented and discussed in the context of the model proposed.
Convention Paper 7682 (Purchase now)

P5-5 On the Use of Motion Feedback as Used in 4th Order Systems
Stefan Willems, Denon & Marantz Holding, Premium Sound Solutions - Leuven, Belgium; Guido D’Hoogh, Retired
Class D amplification allows the design of compact, very high-power amplifiers with high efficiency. Such amplifiers are excellent candidates for use in compact high-powered subwoofers. The drawback of compact subwoofers is the nonlinear compression of the air inside the (acoustically) small box. Fourth-order systems are beneficial over second-order systems due to their increased efficiency. To combine the best of both worlds, a fourth-order design and an acoustically small enclosure, a feedback mechanism has been developed to reduce the nonlinear distortion found in compact high-powered subwoofers. Acceleration feedback on woofer systems is traditionally used in second-order systems. This paper discusses the use of an acceleration and velocity feedback system applied to a fourth-order system.
Convention Paper 7683 (Purchase now)

P5-6 Mapping of the Loudspeaker Emission by the Use of Anemometric Method
Danijel Djurek, Alessandro Volta Applied Ceramics (AVAC) - Zagreb, Croatia; Ivan Djurek, Antonio Petosic, University of Zagreb - Zagreb, Croatia
Lateral wire anemometry (LWA) has been developed for recording air vibration. Standard anemometry is founded upon the hot-wire method, with wire temperatures in the oscillating air flow in the range of 800–1000 °C, which is less suitable because of the heat emitted by the wire itself. LWA deals only with the initial slope of the changing wire resistance, and subsequent Fourier analysis enables measurement of the periodic air velocity. The probe has been developed for precise mapping of the air velocity field in front of the membrane, and the local power emission of the membrane may be evaluated over a region of 0.15 cm².
Convention Paper 7684 (Purchase now)

P5-7 Flat Panel Loudspeaker Consisting of an Array of Miniature Transducers
Daniel Beer, Stephan Mauer, Sandra Brix, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Jürgen Peissig, Sennheiser Electronic GmbH & Co. KG - Wedemark, Germany
Multichannel audio reproduction systems such as Wave Field Synthesis (WFS) use a large number of small and closely spaced loudspeakers. The successful use of WFS requires, among other things, the "invisible" integration of loudspeakers into a room. Compared with conventional cone loudspeakers, flat panel loudspeakers offer advantages for space-saving room integration because of their low depth. In this way flat panel loudspeakers can be found in furniture, in media devices, or hung on the wall like pictures. Besides easy integration, flat loudspeakers should provide at least the same acoustical performance as conventional loudspeakers. This is indeed a problem, because the low depth negatively influences the acoustical quality of reproduction in the lower and middle frequency range. This paper demonstrates a new flat panel loudspeaker consisting of an array of miniature transducers.
Convention Paper 7685 (Purchase now)

P5-8 Subwoofer Loudspeaker System with Dynamic Push-Pull Drive
Drazenko Sukalo, DSLab–Device Solution Laboratory - Munich, Germany
This paper examines the influence of mutual coupling between two driver diaphragms, driven by two electrical signals each with a 90° phase shift, on the voice-coil impedance curve. A new model of the system is described, and the effects are observed using the electrical circuit simulator PSpice. Finally, predicted and measured values are presented.
Convention Paper 7686 (Purchase now)


Thursday, May 7, 14:00 — 15:30

P6 - Multichannel Coding


P6-1 Adaptive Predictive Modeling of Stereo LPC with Application to Lossless Audio Compression
Florin Ghido, Ioan Tabus, Tampere University of Technology - Tampere, Finland
We propose a novel method for exploiting the redundancy of stereo linear prediction coefficients by using adaptive linear prediction on the coefficients themselves. We show that an important proportion of the stereo linear prediction coefficients, on both the intrachannel and the interchannel parts, still contains significant redundancy inherited from the signal. We can therefore significantly reduce the amplitude range of those LP coefficients by using adaptive linear prediction with orders up to 4, separately on the intrachannel and interchannel parts. When integrated into asymmetrical OptimFROG, the new method obtains on average a 0.29 percent improvement in compression with a negligible increase in decoder complexity.
Convention Paper 7666 (Purchase now)
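A toy illustration of the general idea behind the paper above (not the OptimFROG implementation, and with invented function names): predicting each frame's coefficient vector from the previous frame leaves a residual with a much smaller amplitude range, which is cheaper to code.

```python
import numpy as np

def coeff_residuals(lpc_frames, a=1.0):
    """First-order prediction across frames: r[k] = c[k] - a * c[k-1]."""
    frames = np.asarray(lpc_frames, dtype=float)
    residuals = frames.copy()
    residuals[1:] -= a * frames[:-1]
    return residuals

def reconstruct(residuals, a=1.0):
    """Invert the prediction so a decoder recovers the original coefficient frames."""
    frames = np.asarray(residuals, dtype=float).copy()
    for k in range(1, len(frames)):
        frames[k] += a * frames[k - 1]
    return frames
```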

P6-2 A Study of MPEG Surround Configurations and Its Performance Evaluation
Evelyn Kurniawati, Samsudin Ng, Sapna George, ST Microelectronics Asia Pacific Pte. Ltd. - Singapore
The standardization of MPEG Surround in 2007 opens a new range of possibilities for low bit rate multichannel audio encoding. While ensuring backward compatibility with legacy decoders, MPEG Surround offers various configurations to upmix to the desired number of channels. The downmix stream, which can be in mono or stereo format, can be passed to a transform, hybrid, or any other type of encoder. These options give us more than one possible combination for encoding a multichannel stream at a specific bit rate. This paper presents a comparative study of those options in terms of their quality performance that will help us choose the most suitable configuration of MPEG Surround over a range of operating bit rates.
Convention Paper 7667 (Purchase now)

P6-3 Lossless Compression of Spherical Microphone Array Recordings
Erik Hellerud, U. Peter Svensson, Norwegian University of Science and Technology - Trondheim, Norway
The amount of spatial redundancy in recordings from a spherical microphone array is evaluated using a low-delay lossless compression scheme. The original microphone signals, as well as signals transformed to the spherical harmonics domain, are investigated. It is found that the correlation between channels is, as expected, very high for the microphone signals in several different acoustical environments. For the signals in the spherical harmonics domain, the compression gain from using inter-channel prediction is reduced, since this conversion results in many channels with low energy. Several alternatives for reducing the coding complexity are also investigated.
Convention Paper 7668 (Purchase now)


Thursday, May 7, 15:00 — 16:00

Audio for Games



Thursday, May 7, 16:00 — 17:00

Hearing and Hearing Loss Prevention



Thursday, May 7, 16:00 — 18:30

W4 - Microphones—What to Listen For—What Specs to Look For


Chair:
Eddy B. Brixen
Panelists:
Jean-Marie Greijsen, Polyhymnia International
David Josephson, Josephson Engineering
Douglas McKinnie, Middle Tennessee State University
Mikkel Nymand, DPA Microphones
Ossian Ryner, DR, Danish Broadcasting

Abstract:
When selecting microphones for a specific music recording, it is worth knowing what to expect and what to listen for. Accordingly, it is good to know what specifications would be optimal for that microphone. This workshop explains the process of selecting a microphone from both the aesthetic and the technical point of view. Also explained and demonstrated: what to expect when placing the microphone. This is not an "I feel like . . ." presentation. All presenters on the panel are serious and experienced engineers and tonmeisters. The purpose of this workshop is to encourage and teach young engineers and students to take a closer look at the specifications the next time they pick a microphone for a job.


Thursday, May 7, 16:30 — 18:30

W5 - Professional Audio Networking in Sound Reinforcement and Broadcast Applications


Chair:
Umberto Zanghieri, ZP Engineering srl
Panelists:
Bradford Benn, Crown International
David Revel, Technical Multimedia Design
Greg Shay, Axia Audio
Jérémie Weber, Auvitran

Abstract:
Several solutions are available on the market today for digital audio transfer over conventional data cabling. This workshop presents some commercially available solutions, with specific focus on noncompressed, low-latency audio transmission for pro-audio and live applications using standard IEEE 802.3 network technology. The main challenges of digital audio transport will be outlined, including reliability, latency, and deployment. Typical usage scenarios will be proposed, with specific emphasis on live sound reinforcement and broadcast applications.

This event promises a discussion of the challenges and planning involved with deploying digital audio in such scenarios.

The workshop will include a brief overview of potential evolutions related to pro audio networking.


Thursday, May 7, 16:30 — 18:00

P7 - Spatial Audio Processing


P7-1 Low Complexity Binaural Rendering for Multichannel Sound
Kangeun Lee, Changyong Son, Dohyung Kim, Samsung Advanced Institute of Technology - Suwon, Korea
The current paper is concerned with an effective method to emulate multichannel sound in a portable environment where low power is required. The goal of this paper is to address the complexity of binaural rendering from multichannel to stereo sound systems in the case of portable devices. To achieve this, we propose modified discrete cosine transform (MDCT)-based binaural rendering, combined with the Dolby Digital (AC-3) multichannel audio decoder. A reverberation algorithm is added to bring the result closer to real sound. This combined structure is implemented on a DSP processor. The complexity and quality are compared with a conventional head-related transfer function (HRTF) filtering method and with Dolby Headphone, the most current commercial binaural rendering technologies, demonstrating a significant complexity reduction and sound quality comparable to Dolby Headphone.
Convention Paper 7687 (Purchase now)

P7-2 Optimal Filtering for Focused Sound Field Reproductions Using a Loudspeaker Array
Youngtae Kim, Sangchul Ko, Jung-Woo Choi, Jungho Kim, SAIT, Samsung Electronics Co., Ltd. - Gyeonggi-do, Korea
This paper describes audio signal processing techniques for designing multichannel filters that reproduce an arbitrary spatial directivity pattern with a typical loudspeaker array. In designing the multichannel filters, design criteria based on, for example, least-squares methods and the maximum-energy array are introduced as non-iterative optimization techniques with low computational complexity. The abilities of the criteria are first evaluated with a given loudspeaker configuration for reproducing a desired acoustic property in a spatial area of interest. Additional constraints are also imposed to minimize the error between the amplitudes of the actual and the desired spatial directivity patterns. Their limitations in practical applications are revealed by experimental demonstrations, and finally some guidelines for designing optimal filters are proposed.
Convention Paper 7688 (Purchase now)

P7-3 Single-Channel Sound Source Distance Estimation Based on Statistical and Source-Specific Features
Eleftheria Georganti, Philips Research Europe - Eindhoven, The Netherlands, University of Patras, Patras, Greece; Tobias May, Technische Universiteit Eindhoven - Eindhoven, The Netherlands; Steven van de Par, Aki Härmä, Philips Research Europe - Eindhoven, The Netherlands; John Mourjopoulos, University of Patras - Patras, Greece
In this paper we study the problem of estimating the distance of a sound source from a single microphone recording in a room environment. The room effect cannot be separated from the problem without making assumptions about the properties of the source signal. Therefore, it is necessary to develop methods of distance estimation separately for different types of source signals. In this paper we focus on speech signals. The proposed solution is to compute a number of statistical and source-specific features from the speech signal and to use pattern recognition techniques to develop a robust distance estimator for speech signals. Experiments with a database of real speech recordings showed that the proposed model is capable of estimating source distance with acceptable performance for applications such as ambient telephony.
Convention Paper 7689 (Purchase now)

P7-4 Implementation of DSP-Based Adaptive Inverse Filtering System for ECTF Equalization
Masataka Yoshida, Haruhide Hokari, Shoji Shimada, Nagaoka University of Technology - Nagaoka, Niigata, Japan
The Head-Related Transfer Function (HRTF) and the inverse Ear Canal Transfer Function (ECTF) must be accurately determined if stereo earphones are to realize out-of-head sound localization (OHL) with high presence. However, the characteristics of the ECTF depend on the type of earphone used and on repeated mounting and demounting of the earphone. We therefore present a DSP-based adaptive inverse filtering system for ECTF equalization in this paper. The buffer composition and size on the DSP were studied so as to implement the required processing. As a result, we succeeded in constructing a system that works over an audio band of 15 kHz at a sampling frequency of 44.1 kHz. Listening tests clarified that the effective estimation error of the adaptive inverse ECTF for OHL was less than –11 dB, with a convergence time of about 0.3 seconds.
Convention Paper 7690 (Purchase now)
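As a schematic sketch of adaptive inverse filtering in general (a normalized-LMS toy example, not the DSP implementation described in the paper above; the signal and parameter names are assumptions), the filter is adapted so that the cascade of ear-canal response and inverse filter approximates a pure delay.

```python
import numpy as np

def nlms_inverse(x, ectf, taps=256, mu=0.5, delay=64, eps=1e-8):
    """Adapt w so that w * (ectf * x) approximates x delayed by `delay` samples."""
    d = np.concatenate([np.zeros(delay), x[:-delay]])   # delayed target
    u = np.convolve(x, ectf)[:len(x)]                    # signal as heard through the ear canal
    w = np.zeros(taps)
    buf = np.zeros(taps)
    err = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = u[n]
        e = d[n] - w @ buf
        w += mu * e * buf / (buf @ buf + eps)            # normalized LMS update
        err[n] = e
    return w, err
```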

P7-5 Improved Localization of Sound Sources Using Multi-Band Processing of Ambisonic Components
Charalampos Dimoulas, George Kalliris, Konstantinos Avdelidis, George Papanikolaou, Aristotle University of Thessaloniki - Thessaloniki, Greece
The current paper focuses on the use of multi-band ambisonic processing for improved sound source localization. Energy-based localization can be easily delivered using soundfield microphone pairs, as long as free-field conditions and the single omni-directional point-source model apply. Multi-band SNR-based selective processing improves the noise tolerance and the localization accuracy, eliminating the influence of reverberation and background noise. Band-related sound-localization statistics are further exploited to verify the single or multiple sound-source scenario, while continuous spectral fingerprinting indicates the potential arrival of a new source. Different sound-excitation scenarios are examined (single/multiple sources, narrowband/wideband signals, time overlapping, noise, reverberation). Various time-frequency analysis schemes are considered, including filter banks, windowed FFT, and wavelets with different time resolutions. Evaluation results are presented.
Convention Paper 7691 (Purchase now)
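A single-band sketch of the kind of energy/intensity-based direction estimate referred to above, assuming free-field conditions, a single point source, and first-order B-format signals; this is an illustration only, not the paper's multi-band, SNR-selective scheme.

```python
import numpy as np

def estimate_azimuth(W, X, Y):
    """Estimate source azimuth (degrees) from B-format omni (W) and dipole (X, Y) signals."""
    Ix = np.mean(W * X)            # active intensity estimate along x
    Iy = np.mean(W * Y)            # active intensity estimate along y
    return np.degrees(np.arctan2(Iy, Ix))
```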

P7-6 Spatial Audio Content Management within the MPEG-7 Standard of Ambisonic Localization and Visualization Descriptions
Charalampos Dimoulas, George Kalliris, Kostantinos Avdelidis, George Papanikolaou, Aristotle University of Thessaloniki - Thessaloniki, Greece
The current paper focuses on spatial audio video/imaging and sound field visualization using ambisonic processing, combined with MPEG-7 description schemes for multi-modal content description and management. Sound localization can be easily delivered using multi-band ambisonic processing under free-field and single point-source excitation conditions, offering an estimate of the achieved accuracy. Sound source forward-propagation models can be applied, when confident localization accuracy has been achieved, to visualize the corresponding sound field; otherwise, 3-D audio/surround sound reproduction simulation can be used instead. In any case, sound level distribution colormap videos and highlighting images can be extracted. MPEG-7-adapted description schemes are proposed for spatial-audio audiovisual content description and management, facilitating a variety of user-interactive postprocessing applications.
Convention Paper 7692 (Purchase now)


Thursday, May 7, 17:00 — 18:00

Semantic Audio Analysis



Thursday, May 7, 17:00 — 18:30

T4 - The Growing Importance of Mastering in the Home Studio Era


Chair:
Andres Mayo

Abstract:
Artists and producers are widely using their home studios for music production, with a better cost/benefit ratio. But they usually lack technical resources, and the acoustic response of their rooms is unknown. Therefore, there is a greater need for a professional mastering service in order to achieve the so-called "standard commercial quality." This tutorial presents a list of common mistakes that can be found in homemade mixes, with real-life audio examples taken directly from recent mastering sessions, and shows what can and what cannot be fixed at the mastering stage.


Thursday, May 7, 18:30 — 19:30

Heyser Lecture
followed by
Technical Council
Reception


Abstract:
The Richard C. Heyser distinguished lecturer for the 126th AES Convention is Gunnar Rasmussen, a pioneer in the construction of acoustic instrumentation, particularly of microphones, transducers, vibration and related devices. He was employed at Brüel & Kjær Denmark as an electronics engineer immediately after his graduation in 1950. After holding various positions in development, testing, and quality control, he spent one year in the United States working for Brüel & Kjær in sales and service.

After his return to Denmark in the mid-1950s he began the development of a new measurement microphone. The result was superior mechanical stability together with improved temperature and long-term stability. The resulting one-inch pressure microphone soon became the de facto standard microphone for acoustical measurements, replacing the famous W.E. 640AA standardized microphone.

The optimized mechanical design of the new generation of measurement microphones opened up the possibility of reducing the size of the microphones, first to a ½” microphone and then to ¼” and 1/8” microphones with essentially the same superior mechanical, temperature, and long-term stability. Notably, the ½” microphone is still the most widely used measurement tool today. Since the beginning of the 1960s, this microphone design has been preferred for all types of acoustic measurements and has formed the basis for the IEC 1094 series of international standards for measurement microphones.

Gunnar Rasmussen received the Danish Design Award in 1969 for his novel design of the microphones that were exhibited at the New York Museum of Modern Art. He also developed the first acoustically optimized sound level meter, where the shape of the body was designed to minimize the effect of reflections from the casing to the microphone. This type 2203 Sound Level meter was for many years seen as the archetype of sound level meters and its characteristic shape became the symbol of a sound level meter.

Other major inventions and designs include the Delta Shear accelerometer, the dual piston pistonphone calibrator for precision calibration, the face-to-face sound intensity probe and hydrophones, occluded ears, artificial mouth, etc. Rasmussen is also the author of numerous papers on acoustics and vibration and has served as chairman and vice-chairman of various international organizations and standard committees. In 1990 he received the CETIM medal for his contribution to the field of intensity techniques. He is also a Fellow of the Acoustical Society of America.

In 1994 Rasmussen started his own company, G.R.A.S. Sound and Vibration. Originally a company specializing in precision Outdoor Microphones for permanent noise monitoring around airports, it is now one of the world’s leading companies in acoustic front-ends and transducers forming a wide range of general purpose and specialized microphones, electro-acoustic measurement devices such as ear couplers, precision calibration tools and multi-dimensional sound intensity probes. The title of his lecture is, “The Reproduction of Sound Starts at the Microphone.”

The microphones may be developed for many specific purposes: for communication, recording or precision measurements. Quality may have different meaning for different applications. Price may be a dominating factor. Carbon microphones were dominating up to the 1950s. Electret microphones have taken the place of carbon microphones with great improvement in quality and performance at low prices. The MEMS microphones are on the way.

The challenge in the high quality microphone development is to match or exceed the human ear in perception of sound for measurement purposes. Without measurements we cannot qualify our progress. We are still trying to match the frequency band, the dynamic range, the phase linearity of the human ear and to obtain very good reproducibility in all situations where humans are involved. We need microphones for development, for standardized measurements and for legal related measurements. Where are we today?


Friday, May 8, 09:00 — 10:00

Spatial Audio



Friday, May 8, 09:00 — 10:30

T5 - Lipsync in the Latency Age


Presenter:
Friedrich Gierlinger, IRT Munich

Abstract:
Synchronization of audio and video through the whole broadcast chain has become ever more challenging with the switch to fully digital production methods, wireless picture acquisition practices, omnipresent and differing processing latencies, and an unavoidable delay in consumer displays that may or may not be possible to compensate. This tutorial systematically examines the various reasons why lipsync problems occur, quantifies the possible error at any given stage, and recommends best practices to improve the situation.


Friday, May 8, 09:00 — 11:00

W6 - The Opera Behind the Opera—Challenges and Real World Solutions in Transmissions of Big Opera Productions


Presenters:
Billy Henningsen, Senior Sound Engineer, Norwegian TV - Oslo, Norway
Joseph Schütz, Senior Engineer, Austrian Radio - Vienna, Austria
Peter Urban, Senior Engineer, Bavarian Radio - Munich, Germany

Abstract:
In this workshop the aesthetics and concepts of an ideal opera sound will first be introduced in theory. Then be swept away by the realities of awkward acoustics and of directors and producers with challenging stage directions and last-minute ideas. Bayreuth and Vienna, the new opera house in Oslo, and all the other big stages have had their transmissions, and you will hear how these challenges have been met. In stereo and surround.


Friday, May 8, 10:00 — 11:00

Signal Processing



Friday, May 8, 10:00 — 13:30

TT3 - Herkulessaal der Residenz


Abstract:
A comparison of microphone settings for a live broadcast of a symphonic concert in 5.1 and stereo at Bavarian Radio will be presented. The participants will have the opportunity to compare the settings themselves at the console and to gain their own experience with a 5.1 mix via multitrack recordings made with different ambience microphone arrays. Presenters are Wolfram Graul and Klemens Kamp. The tour is limited to 10 people and transportation is not provided.


Price: Free

Friday, May 8, 10:30 — 12:00

P10 - Audio for Telecommunications


P10-1 Harmonic Representation and Auditory Model-Based Parametric Matching and its Application in Speech/Audio AnalysisAlexey Petrovsky, Elias Azarov, Belarusian State University of Informatics and Radioelectronics - Minsk, Belarus; Alexander Petrovsky, Bialystok Technical University - Bialystok, Poland
The paper presents new methods for the selection of sinusoidal and transient components in hybrid sinusoidal modeling of speech/audio. The instantaneous harmonic parameters (magnitude, frequency, and phase) are calculated as the result of narrow-band filtering of the speech/audio signal. A synthesis of frequency-modulated filters with closed-form impulse responses is proposed. The filter frequency bounds can be determined during component frequency tracking and adjusted according to the fundamental frequency modulations; in this way, harmonic/noise decomposition of speech/audio can be implemented. The transient components are modeled by matching pursuit with a frame-based, psychoacoustically optimized wavelet packet dictionary. The choice of the most relevant coefficients is based on maximizing the match between the auditory excitation scalograms of the original and modeled signals.
Convention Paper 7705 (Purchase now)

P10-2 Perceptual Compression Methods for Metadata in Directional Audio Coding Applied to Audiovisual TeleconferenceToni Hirvonen, Institute of Computer Science (ICS) of the Foundation for Research and Technology - Hellas, Greece; Jukka Ahonen, Ville Pulkki, TKK - Finland
In the teleconferencing application of Directional Audio Coding, the transmitted data consist of a monophonic audio signal and directional metadata measured in frequency bands as a function of time. In reproduction, each frequency channel of the signal is reproduced in the corresponding direction with the corresponding diffuseness. This paper examines methods for reducing the data rate of the metadata. The compression methods are based on psychoacoustic studies of the accuracy of directional hearing, and are further developed and validated. Informal tests with one-way reproduction, as well as usability testing in which an actual teleconference was arranged, were used for this purpose. The results indicate that the data rate can be as low as approximately 3 kbit/s without a significant loss in the reproduced spatial quality.
Convention Paper 7706 (Purchase now)
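
As background to the kind of metadata being compressed: in DirAC each time-frequency tile carries a direction of arrival and a diffuseness value. The sketch below is a purely illustrative quantizer, not the scheme from the paper; it only shows the general idea that direction can be coded more coarsely when diffuseness is high, since directional hearing is less acute for diffuse sound. The step sizes are arbitrary assumptions.

    import numpy as np

    def quantize_direction(azimuth_deg, diffuseness, fine_step=5.0, coarse_step=20.0):
        # Illustrative only: use a finer azimuth grid for nearly non-diffuse tiles
        # and a coarser grid (fewer bits) for strongly diffuse tiles.
        step = fine_step if diffuseness < 0.5 else coarse_step
        return step * np.round(np.asarray(azimuth_deg) / step)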

P10-3 Speaker Detection and Separation with Small Microphone ArraysMaximo Cobos, Jose J. Lopez, David Martinez, Universidad Politécnica de Valencia - Valencia, Spain
Small microphone arrays are desirable for many practical speech processing applications. In this paper we describe a system for detecting several sound sources in a room and enhancing a predominant target source using a pair of close microphones. The system consists of three main steps: time-frequency processing of the input signals, source localization via model fitting, and time-frequency masking for interference reduction. Experiments and results using recorded signals in real scenarios are discussed.
Convention Paper 7707 (Purchase now)

P10-4 Directional Audio Coding with Stereo Microphone InputJukka Ahonen, Ville Pulkki, TKK - Finland; Fabian Kuech, Giovanni Del Galdo, Markus Kallinger, Richard Schultz-Amling, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
The use of a stereo microphone configuration as input to a teleconferencing application of Directional Audio Coding (DirAC) is presented. DirAC is a method for spatial sound processing in which the direction of arrival of sound and the diffuseness are analyzed and used for different purposes in reproduction. So far, omnidirectional microphones arranged in an array have been used to generate input signals for one- and two-dimensional sound field analysis in DirAC processing. In this study the possibility of using domestic stereo microphones for DirAC analysis is investigated. Different methods to derive omnidirectional and dipole signals from stereo microphones for directional analysis are presented and their applicability is discussed.
Convention Paper 7708 (Purchase now)
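
One straightforward way to obtain omnidirectional and dipole signals from a spaced pair of omnidirectional capsules is the sum/difference approach sketched below. This is only a minimal illustration of the idea, not necessarily one of the methods compared in the paper; the 10 cm spacing is an assumed value.

    import numpy as np

    def omni_and_dipole(left, right, fs=48000, spacing=0.1, c=343.0):
        # Approximate the omnidirectional (W) signal as the average of the two capsules.
        w = 0.5 * (left + right)
        # The capsule difference behaves like a dipole but with a first-order
        # high-pass response; equalize it by c / (j * 2*pi*f * d) at low frequencies.
        diff = np.fft.rfft(left - right)
        freqs = np.fft.rfftfreq(len(left), 1.0 / fs)
        eq = np.zeros_like(diff)
        nonzero = freqs > 0
        eq[nonzero] = diff[nonzero] * c / (1j * 2 * np.pi * freqs[nonzero] * spacing)
        x = np.fft.irfft(eq, n=len(left))
        return w, x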

P10-5 Robust Noise Reduction Based on Stochastic Spatial FeaturesMitsunori Mizumachi, Kyushu Institute of Technology - Fukuoka, Japan
This paper proposes a robust noise reduction method relying on stochastic spatial features. Almost all noise reduction methods have both strengths and weaknesses in the real world. In this paper the time evolution of the direction of arrival (DOA) and its stochastic reliability are the clues for selecting a suitable noise reduction approach in time-variant noisy environments, since the DOA is an important spatial feature in beamforming for noise reduction. On the other hand, single-channel approaches to noise reduction may be preferable when DOA estimates are not reliable. Either spectral subtraction or beamforming is therefore selected to achieve robust noise reduction, depending on the DOA estimate and its reliability. The proposed method showed an advantage in noise reduction compared with a conventional approach.
Convention Paper 7709 (Purchase now)


Friday, May 8, 11:00 — 12:00

Microphones and Applications



Friday, May 8, 11:00 — 13:30

T6 - Loudness—Light at the End of the Tunnel


Chair:
Florian Camerer, ORF, EBU Group P/LOUD
Presenters:
Eelco Grimm, Dutch Loudness Committee
Mike Kahsnitz, rtw
Ralph Kessler, Pinguin Engineering
Thomas Lund, tc electronic
Andrew Mason, BBC R&D

Abstract:
Audio levels in broadcasting have become increasingly inconsistent over the last decades. Despite clear guidelines and recommended practices, the general use of peak measurement in audio metering and the development of ever more sophisticated level processors have led to over-compression of audio signals, with the questionable aim of being louder than the competition. This attitude has especially impacted the audio quality of advertisements and promos, which are left with very little dynamic range. In what was already considered a hopeless situation, the introduction of loudness level metadata and especially of an international standard for loudness measurement (ITU-R BS.1770) offers a light at the end of the tunnel. A few broadcasters and even whole countries have addressed the loudness issue thoroughly, and their experience shows that it is possible to solve the problem to the advantage of the consumer. It is long overdue to establish a new paradigm in audio levelling: the switch from peak normalization to loudness normalization. With widespread adoption of this approach, consistent loudness not only within a channel but also between different channels will be within reach, thus finding the “Holy Grail” of audio broadcasting.

In this session the current situation from the perspective of the EBU Group “P/LOUD” will be examined. Vendors will present their approaches to loudness metering.
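
As a minimal illustration of the paradigm shift described above, the sketch below contrasts peak normalization with loudness normalization. The loudness value is assumed to come from an ITU-R BS.1770-style meter, which is not implemented here, and the -23 LUFS target is used purely as an example figure.

    import numpy as np

    def peak_normalization_gain_db(samples, target_peak_dbfs=-1.0):
        # Looks only at the maximum sample value; says nothing about perceived loudness.
        peak_dbfs = 20.0 * np.log10(np.max(np.abs(samples)))
        return target_peak_dbfs - peak_dbfs

    def loudness_normalization_gain_db(measured_lufs, target_lufs=-23.0):
        # measured_lufs is assumed to come from a BS.1770 loudness meter.
        # A heavily compressed promo at -15 LUFS gets -8 dB of gain, a dynamic
        # drama at -27 LUFS gets +4 dB, so both end up equally loud.
        return target_lufs - measured_lufs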


Friday, May 8, 11:30 — 13:30

T7 - Binaural Audio Technology—History, Current Practice, and Emerging Trends


Chair:
Robert Schulein, RBS Consultants

Abstract:
During the winter and spring of 1931-32, Bell Telephone Laboratories, in cooperation with Leopold Stokowski and the Philadelphia Symphony Orchestra, undertook a series of tests of musical reproduction using the most advanced apparatus obtainable at that time. The objectives were to determine how closely an acoustic facsimile of an orchestra could be approached using both stereo loudspeakers and binaural reproduction. Detailed documents discovered within the Bell Telephone archives will serve as a basis for describing the results and problems revealed while creating the binaural demonstrations. Since these historic events, interest in binaural recording and reproduction has grown in areas such as sound field recording, acoustic research, sound field simulation, audio for electronic games, music listening, and artificial reality. Each of these technologies has its own technical concerns involving transducers, environmental simulation, human perception, position sensing, and signal processing. This tutorial will cover the underlying principles germane to binaural perception, simulation, recording, and reproduction. It will include live demonstrations as well as recorded audio/visual examples.


Friday, May 8, 11:30 — 13:30

W8 - We Have You Surrounded—Mastering for Multichannel


Chair:
Darcy Proper, Senior Mastering Engineer, Galaxy Studios - Mol, Belgium
Panelists:
Simon Heyworth, Owner/Chief Engineer, Super Audio Mastering - Devon, UK
Thor Levgold, Owner/Chief Engineer, Sonovo Mastering - Stavanger, Norway

Abstract:
With the increasing adoption of home cinema systems, DVD releases by performing artists, and the EBU specifying 5.1 surround audio as the standard for HDTV, the need for professional mastering in multichannel formats is growing. For many, working in surround represents a paradigm shift and brings with it unique challenges and requirements. The panelists in this workshop will share some of the challenges they face in the course of their work and some practical tips for handling them when they arise in stereo, surround, and multimedia productions.


Friday, May 8, 12:00 — 13:00

Loudspeakers and Headphones



Friday, May 8, 13:00 — 16:30

P11 - Audio Coding


Chair: Nick Zacharov

P11-1 A Novel Scheme for Low Bit Rate Unified Speech and Audio Coding—MPEG RM0Max Neuendorf, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Philippe Gournay, Université de Sherbrooke - Sherbrooke, Quebec, Canada; Markus Multrus, Jérémie Lecomte, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Bruno Bessette, Université de Sherbrooke - Sherbrooke, Quebec, Canada; Ralf Geiger, Stefan Bayer, Guillaume Fuchs, Johannes Hilpert, Nikolaus Rettelbach, Frederik Nagel, Julien Robilliard, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Redwan Salami, VoiceAge Corporation - Montreal, Quebec, Canada; Gerald Schuller, Fraunhofer IDMT - Ilmenau, Germany; Roch Lefebvre, Université de Sherbrooke - Sherbrooke, Quebec, Canada; Bernhard Grill, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Coding of speech signals at low bit rates, such as 16 kbps, has to rely on an efficient speech reproduction model to achieve reasonable speech quality. However, for audio signals not fitting the model, this approach generally fails. On the other hand, generic audio codecs, designed to handle any kind of audio signal, tend to show unsatisfactory results for speech signals, especially at low bit rates. To overcome this, a process was initiated by ISO/MPEG aiming to standardize a new codec with consistently high quality for speech, music, and mixed content over a broad range of bit rates. After a formal listening test evaluating several proposals, MPEG selected the best-performing codec as the reference model for the standardization process. This paper describes this codec in detail and shows that the new reference model reaches the goal of consistently high quality for all signal types.
Convention Paper 7713 (Purchase now)

P11-2 A Time-Warped MDCT Approach to Speech Transform CodingBernd Edler, Sascha Disch, Leibniz Universität Hannover - Hannover, Germany; Stefan Bayer, Guillaume Fuchs, Ralf Geiger, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
The modified discrete cosine transform (MDCT) is often used for audio coding due to its critical sampling property and good energy compaction, especially for harmonic tones with constant fundamental frequencies (pitch). However, in voiced human speech the pitch is time-varying and thus the energy is spread over several transform coefficients, leading to a reduction of coding efficiency. The approach presented herein compensates for pitch variation in each MDCT block by application of time-variant re-sampling. A dedicated signal adaptive transform window computation ensures the preservation of the time domain aliasing cancellation (TDAC) property. Re-sampling can be designed such that the duration of the processed blocks is not altered, facilitating the replacement of the conventional MDCT in existing audio coders.
Convention Paper 7710 (Purchase now)
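
For readers unfamiliar with the transform being modified, a minimal sketch of a conventional MDCT of a single block is given below, using a sine window that satisfies the TDAC condition. The time-warping itself, i.e., re-sampling each block on a time-variant grid that follows the pitch contour before the transform, is not shown.

    import numpy as np

    def mdct(block):
        # block has 2N samples; the transform returns N coefficients.
        N = len(block) // 2
        n = np.arange(2 * N)
        window = np.sin(np.pi / (2 * N) * (n + 0.5))   # satisfies the TDAC condition
        k = np.arange(N)
        phases = np.pi / N * np.outer(n + 0.5 + N / 2, k + 0.5)
        return (block * window) @ np.cos(phases)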

P11-3 A Phase Vocoder Driven Bandwidth Extension Method with Novel Transient Handling for Audio CodecsFrederik Nagel, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Sascha Disch, Leibniz Universitaet Hanover - Hanover, Germany; Nikolaus Rettelbach, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Storage or transmission of audio signals is often subject to strict bit-rate constraints. This is accommodated by audio encoders that encode the lower-frequency part in a waveform-preserving way and approximate the high-frequency signal from the lower-frequency data by using a set of reconstruction parameters. This so-called bandwidth extension can lead to roughness and other unpleasant auditory sensations. In this paper the origin of these artifacts is identified, and an improved bandwidth extension method called Harmonic Bandwidth Extension (HBE) is outlined that avoids auditory roughness in the reconstructed audio signal. Since HBE is based on phase vocoders, and thus is intrinsically not well suited for transient signals, an enhancement of the method by a novel transient handling approach is presented. A listening test demonstrates the advantage of the proposed method over a simple phase vocoder approach.
Convention Paper 7711 (Purchase now)

P11-4 Efficient Cross-Fade Windows for Transitions between LPC-Based and Non-LPC-Based Audio CodingJérémie Lecomte, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Philippe Gournay, Université de Sherbrooke - Sherbrooke, Quebec, Canada; Ralf Geiger, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Bruno Bessette, Université de Sherbrooke - Sherbrooke, Quebec, Canada; Max Neuendorf, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
The reference model selected by MPEG for the forthcoming unified speech and audio codec (USAC) switches between a non-LPC-based coding mode (based on AAC) operating in the transform-domain and an LPC-based coding (derived from AMR-WB+) operating either in the time domain (ACELP) or in the frequency domain (wLPT). Seamlessly switching between these different coding modes required the design of a new set of cross-fade windows optimized to minimize the amount of overhead information sent during transitions between LPC-based and non-LPC-based coding. This paper presents the new set of windows that was designed in order to provide an adequate trade-off between overlap duration and time/frequency resolution, and to maintain the benefits of critical sampling through all coding modes.
Convention Paper 7712 (Purchase now)

P11-5 Low Bit-Rate Audio Coding in Multichannel Digital Wireless Microphone SystemsStephen Wray, APT Licensing Ltd. - Belfast, Northern Ireland, UK
Despite advances in voice and data communications in other domains, sound production for live events (concerts, theater, conferences, sports, worship, etc.) still largely depends on spectrum-inefficient forms of analog wireless microphone technology. In these live scenarios, low-latency transmission of high-quality audio is mission critical. However, while demand increases for wireless audio channels (for microphones, in-ear monitoring, and talkback systems), some of the radio bands available for “Program Making and Special Events” are to be re-assigned for new wireless mobile telephony and Internet connectivity services: the FCC recently decided to permit so-called White Space Devices to operate in sections of UHF spectrum previously reserved for shared use by analog TV and wireless microphones. This paper examines the key performance aspects of low bit-rate audio codecs for the next generation of bandwidth-efficient digital wireless microphone systems that meet the future needs of live events.
Convention Paper 7714 (Purchase now)

P11-6 Krasner’s Audio Coder RevisitedJamie Angus, Chris Ball, Thomas Peeters, Rowan Williams, University of Salford - Salford, Greater Manchester, UK
An audio compression encoder and decoder system based on Krasner’s work was implemented. An improved quadrature mirror filter tree, which more closely approximates modern critical band measurements, splits the input signal into subbands that are encoded using both adaptive quantization and entropy coding. The uniform adaptive quantization scheme developed by Jayant was implemented and enhanced through the addition of non-uniform quantization steps and look-ahead. The complete codecs are evaluated using the perceptual audio evaluation algorithm PEAQ, and their performance is compared to equivalent MPEG-1 Layer III files. Initial, limited tests reveal that the proposed codecs score Objective Difference Grades close to, or even better than, MPEG-1 Layer III files encoded at a similar bit rate.
Convention Paper 7715 (Purchase now)

P11-7 Inter-Channel Prediction to Prevent Unmasking of Quantization Noise in BeamformingMauri Väänänen, Nokia Research Center - Tampere, Finland
This paper studies the use of inter-channel prediction for the purpose of preventing or reducing the risk of noise unmasking when beamforming type of processing is applied to quantized microphone array signals. The envisaged application is the re-use and postprocessing of user-created content. Simulations with an AAC coder and real-world recordings using two microphones are performed to study the suitability of two existing coding tools for this purpose: M/S stereo coding and the AAC Long Term Predictor (LTP) tool adapted for inter-channel prediction. The results indicate that LTP adapted for inter-channel prediction often gives more coding gain than mere M/S stereo coding, both in terms of signal-to-noise ratio and perceptual entropy.
Convention Paper 7716 (Purchase now)
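
As background to the two coding tools compared, the sketch below shows the mid/side transform. The point relevant to the paper is that quantization noise added in the M/S domain reappears correlated between the decoded left and right channels, which reduces the risk of it being unmasked by a subsequent beamformer. This is only an illustrative sketch, not code from the paper, and the inter-channel LTP adaptation is not shown.

    import numpy as np

    def ms_encode(left, right):
        mid = 0.5 * (left + right)
        side = 0.5 * (left - right)
        return mid, side

    def ms_decode(mid, side):
        # Exact inverse; any quantization noise added to mid/side reappears
        # correlated (in-phase or anti-phase) in the decoded channels.
        return mid + side, mid - side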


Friday, May 8, 13:30 — 17:30

TT5 - Stadtmuseum Musikinstrumente


Abstract:
Munich City Museum, Musical Instruments Collection

The extraordinary collection of the Sammlung Musik of the Münchner Stadtmuseum presents exhibits highlighting the construction of musical instruments from different cultures as well as a wide survey of the musical activities of mankind. On show are about 1,500 musical instruments from Africa, Asia, the precolonial Americas, and Europe, out of a total of 6,000 objects. During the guided tour of the collections visitors have the opportunity to play the complete gamelans from the Indonesian islands of Java and Bali.


Price: EUR 20

Friday, May 8, 13:30 — 17:30

TT6 - Herkulessaal der Residenz


Abstract:
A comparison of microphone settings for a live broadcast of a symphonic concert in 5.1 and stereo at Bavarian Radio will be presented. The participants will have the opportunity to compare the settings themselves at the console and to gain their own experience with a 5.1 mix via multitrack recordings made with different ambience microphone arrays. Presenters are Wolfram Graul and Klemens Kamp. The tour is limited to 10 people and transportation is not provided.


Price: Free

Friday, May 8, 13:30 — 15:00

P12 - Loudspeakers


P12-1 Reduction of Distortion in Conical Horn Loudspeakers at High LevelsSverre Holm, University of Oslo - Oslo, Norway; Rune Skramstad, Paragon Arrays - Drammen, Norway
Many horns have audible distortion at high levels. We measured a horn consisting of 6 conical sections with a 10-inch element at 99 dB SPL. A closed back gave a maximum of 2.4 percent second-harmonic and 3.4 percent third-harmonic distortion in the 100–1000 Hz range, while an open construction had 1.25 percent and 0.6 percent. A new semi-permeable back chamber reduced this to 0.7 percent and 0.35 percent. We hypothesize that the distortion is due partly to the non-linear compliance of air in the back chamber and partly to the element’s interaction with the front and back loading of the horn, and that the new construction loads the element in a more optimal way.
Convention Paper 7717 (Purchase now)

P12-2 Comparison of Different Methods for the Subjective Sound Quality Evaluation of Compression DriversJosé Martínez, Acustica Beyma S.L. - Valencia, Spain; Joan Croañes, Escola Politecnica Superior de Gandia - Valencia, Spain; Jorge Francés Monllor, Jaime Ramis, Universidad de Alicante - Alicante, Spain
In this paper an approach to the problem of sound quality evaluation of radiating systems is considered, applying a perceptual model. One of the objectives is to use the parameter proposed by Moore [. . .] to test whether it provides satisfactory results when applied to the quality evaluation of indirect-radiation loudspeakers. Three compression drivers have been used for this purpose. Recordings with different test signals at different input voltages have been made. Using this experimental base, the problem is approached from different points of view: [. . .] taking into consideration classic sound quality parameters such as roughness, sharpness, and tonality; [. . .] applying the parameter suggested by Moore, obtained from the application of a perceptual model. Moreover, a psychoacoustic experiment has been carried out on a population of 25 people. The results, although preliminary and strongly dependent on the reference signal used to obtain Rnonlin, show a good correlation with the Rnonlin values.
Convention Paper 7718 (Purchase now)

P12-3 Membrane Modes in Transducers with the Direct D/A ConversionLibor Husník, Czech Technical University in Prague - Prague, Czech Republic
The operating principle of systems with direct acoustic D/A conversion, which are sometimes called digital loudspeakers, brings new features to the field of transducer design. There are many design possibilities for these systems, using different transduction principles and spatial arrangements of the constituting parts. This paper deals with the single-acting condenser transducer, suitable for micromachining applications, in which the membrane is driven by a partitioned back electrode. While in conventional transducers the electric force between the back electrode and the membrane is evenly distributed, in digital transducers this is no longer the case. Consequences for membrane vibrations are presented for some cases of excitation by various distributions of forces representing given binary combinations of the dynamic level.
Convention Paper 7719 (Purchase now)

P12-4 Increasing Active Radiating Factor of High-Frequency Horns by Using Staggered Arrangement in Loudspeaker Line ArrayKang An, Yong Shen, Aiping Zhang, Nanjing University - Nanjing, China
The Active Radiating Factor (ARF) is an important parameter for analyzing a loudspeaker line array when considering the gap between adjacent transducers, especially for high-frequency horns. As the ARF should be as high as possible, a staggered arrangement of the horns is introduced in this paper. The responses in the vertical and horizontal directions are analyzed. Compared with the conventional arrangement, the negative effects of the gaps are reduced and the responses are improved in simulation.
Convention Paper 7720 (Purchase now)


Friday, May 8, 14:00 — 15:00

Network Audio Systems



Friday, May 8, 14:00 — 16:00

T8 - Microphone History


Chair:
Jörg Wuttke, Schoeps, Technical Director Emeritus
Presenters:
Ulrich Apel, Microtech Gefell GmbH
Sean Davies, S.W. Davies
Stephan Peus, Neumann GmbH

Abstract:
This tutorial will be presented in 3 parts.

Stephan Peus' presentation, "35 Years of Microphone Development at Neumann—What Touched Us, What Moved Us," gives an insight into specific development topics and some very special test procedures, including: the microphone’s transient response, with insights beyond frequency response or polar pattern; RF susceptibility, already a topic before the era of mobile phones; capsule distortion measurement, a difficult procedure yielding many interesting results; and the dynamic range and self-noise level of studio microphones, a remarkable development within the 35 years in question.

Ulrich Apel will report on "The Importance of Vacuum for Condenser Microphones." He will speak on such topics as: the electron tube, which was and still is an important step in the development of condenser microphones; the construction of specially made tubes for use in microphones, such as the RE084k, Hiller MSC2, Telefunken AC701k, EF804, Valvo EF86, 6072, etc.; and special measuring capabilities for selecting tubes with regard to noise, stability, and sound.

Sean Davies' presentation is "Microphone History: The Why, The How, and The Who." Developments in microphone technology are reviewed from the earliest telephone-based types through the decades up to the 1970s. The “Why” section looks at the reasons behind the different designs, e.g., directional characteristics, output signal levels, diffraction effects, and frequency range. The “How” examines the solutions proposed for the “Why” section, and the “Who” identifies the landmark designs and the designers behind them.


Friday, May 8, 14:00 — 18:30

P13 - Spatial Audio and Spatial Perception


Chair: Tapio Lokki

P13-1 Evaluation of Equalization Methods for Binaural SignalsZora Schärer, Alexander Lindau, TU Berlin - Berlin, Germany
The most demanding test criterion for the quality of binaural simulations of acoustical environments is whether they can be perceptually distinguished from a real sound field or not. If the simulation provides a natural interaction and sufficient spatial resolution, differences are predominantly perceived in terms of spectral distortions due to a non-perfect equalization of the transfer functions of the recording and reproduction systems (dummy head microphones, headphones). In order to evaluate different compensation methods, several headphone transfer functions were measured on a dummy head. Based upon these measurements, the performance of different inverse filtering techniques re-implemented from literature was evaluated using auditory measures for spectral differences. Additionally, an ABC/HR listening test was conducted, using two different headphones and two different audio stimuli (pink noise, acoustical guitar). In the listening test, a real loudspeaker was directly compared to a binaural simulation with high spatial resolution, which was compensated using seven different equalization methods.
Convention Paper 7721 (Purchase now)

P13-2 Crosstalk Cancellation between Phantom SourcesFlorian Völk, Thomas Musialik, Hugo Fastl, Technical University of München - München, Germany
This paper presents an approach using phantom sources (resulting from the so-called summing localization of two loudspeakers) as sources for crosstalk cancellation (CTC). The phantom sources can be rotated synchronously with the listener’s head, thus demanding significantly less processing power than traditional approaches using fixed CTC loudspeakers, as an online re-computation of the CTC filters is (under certain circumstances) not necessary. First results of localization experiments show the general applicability of this procedure.
Convention Paper 7722 (Purchase now)
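
For orientation, the sketch below shows the classic fixed-loudspeaker form of crosstalk cancellation on which the phantom-source approach builds: per frequency bin, the 2x2 matrix of loudspeaker-to-ear transfer functions is inverted with regularization. The phantom-source rotation that avoids re-computing these filters for every head orientation is the paper's contribution and is not shown; the regularization constant is an assumed value.

    import numpy as np

    def ctc_filters(H, beta=1e-3):
        # H: complex array of shape (num_bins, 2, 2) holding, per frequency bin,
        # the loudspeaker-to-ear transfer functions. Returns filters C such that
        # H @ C is approximately the identity (regularized least-squares inverse).
        C = np.zeros_like(H)
        eye = np.eye(2)
        for i in range(H.shape[0]):
            Hi = H[i]
            C[i] = np.linalg.inv(Hi.conj().T @ Hi + beta * eye) @ Hi.conj().T
        return C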

P13-3 Preliminary Evaluation of Sweet Spot Size in Virtual Sound Reproduction Using DipolesYesenia Lacouture Parodi, Per Rubak, Aalborg University - Aalborg, Denmark
In a previous study, three crosstalk cancellation techniques were evaluated and compared under different conditions. Least square approximations in frequency and time domain were evaluated along with a method based on minimum-phase approximation and a frequency independent delay. In general, the least square methods outperformed the method based on minimum-phase approximation. However, the evaluation was only done for the best-case scenario, where the transfer functions used to design the filters correspond to the listener’s transfer functions and his/her location and orientation relative to the loudspeakers. In this paper we present a follow-up evaluation of the performance of the three inversion techniques when these conditions are violated. A setup to measure the sweet spot of different loudspeaker arrangements is described. Preliminary measurement results are presented for loudspeakers placed at the horizontal plane and an elevated position, where a typical 60-degree stereo setup is compared with two closely spaced loudspeakers. Additionally, two- and four-channel arrangements are evaluated.
Convention Paper 7723 (Purchase now)

P13-4 The Importance of the Direct to Reverberant Ratio in the Perception of Distance, Localization, Clarity, and EnvelopmentDavid Griesinger, Consultant - Cambridge, MA, USA
The Direct to Reverberant ratio (D/R)—the ratio of the energy in the first wave front to the reflected sound energy—is absent from most discussions of room acoustics. Yet only the direct sound (DS) provides information about the localization and distance of a sound source. This paper discusses how the perception of DS in a reverberant field depends on the D/R and the time delay between the DS and the reverberant energy. Threshold data for DS perception will be presented, and the implications for listening rooms, hall design, and electronic enhancement will be discussed. We find that both clarity and envelopment depend on DS detection. In listening rooms the direct sound must be at least equal to the total reflected energy for accurate imaging. As the room becomes larger (and the time delay increases) the threshold goes down. Some conclusions: typical listening rooms benefit from directional loudspeakers, small concert halls should not have a shoe-box shape, early reflections need not be lateral, and electroacoustic enhancement of late reverberation may be vital in small halls.
Convention Paper 7724 (Purchase now)
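
A minimal sketch of how the direct-to-reverberant ratio can be estimated from a measured room impulse response is given below; the 5 ms split point after the first arrival is an illustrative assumption, not a value taken from the paper.

    import numpy as np

    def direct_to_reverberant_db(impulse_response, fs, direct_window_ms=5.0):
        # Split the impulse response a few milliseconds after the strongest (direct)
        # arrival and compare the energy before and after the split.
        onset = np.argmax(np.abs(impulse_response))
        split = onset + int(direct_window_ms * 1e-3 * fs)
        direct_energy = np.sum(impulse_response[:split] ** 2)
        reverberant_energy = np.sum(impulse_response[split:] ** 2)
        return 10.0 * np.log10(direct_energy / reverberant_energy)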

P13-5 Frequency-Domain Interpolation of Empirical HRTF DataBrian Carty, Victor Lazzarini, National University of Ireland - Maynooth, Ireland
This paper discusses Head Related Transfer Function (HRTF)-based artificial spatialization of audio. Two alternatives to the minimum phase method of HRTF interpolation are suggested, offering novel approaches to the challenge of phase interpolation. A phase truncation, magnitude interpolation technique aims to avoid complex preparation, manipulation or transformation of empirical HRTF data, and any inaccuracies that may be introduced by these operations. A second technique adds low frequency nonlinear frequency scaling to a functionally based phase model. This approach aims to provide a low frequency spectrum more closely aligned to the empirical HRTF data. Test results indicate favorable performance of the new techniques.
Convention Paper 7725 (Purchase now)

P13-6 Analysis and Implementation of a Stereophonic Play Back System for Adjusting the “Sweet Spot” to the Listener’s PositionSebastian Merchel, Stephan Groth, Dresden University of Technology - Dresden, Germany
This paper focuses on a stereophonic playback system designed to adjust the “sweet spot” to the listener’s position. The system includes an optical face tracker that provides information about the listener’s x-y position. Accordingly, the loudspeaker signals are manipulated in real time in order to move the “sweet spot.” The stereophonic perception with an adjusted “sweet spot” is theoretically investigated on the basis of several models of binaural hearing. The results indicate that an adjustment of the signals corresponding to the center of the listener’s head does improve localization over the whole listening area. Some localization error remains due to asymmetric signal paths for off-center listening positions, but this error can be estimated and compensated for.
Convention Paper 7726 (Purchase now)

P13-7 Issues on Dummy-Head HRTFs MeasurementsDaniela Toledo, Henrik Møller, Aalborg University - Aalborg, Denmark
The dimensions of a person are small compared to the wavelength at low frequencies. Therefore, at these frequencies head-related transfer functions (HRTFs) should decrease asymptotically until they reach 0 dB—i.e., unity gain—at DC. This is not the case in measured HRTFs: the limitations of the equipment used result in a wrong—and random—value at DC, and the effect can be seen well within the audio frequency range. We have measured HRTFs on a commercially available Neumann KU-100 dummy head and analyzed issues associated with calibration, DC correction, and low-frequency response. Informal listening tests suggest that the ripples seen in HRTFs with a wrong DC value affect the sound quality in binaural synthesis.
Convention Paper 7727 (Purchase now)

P13-8 Binaural Processing Algorithms: Importance of Clustering Analysis for Preference TestsAndreas Silzle, Bernhard Neugebauer, Sunish George, Jan Plogsties, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
The acceptability of a newly proposed technology for commercial application is often assumed if the sound quality reached in a listening test surpasses a certain target threshold. As an example, it is a well-established procedure for decisions on the deployment of audio codecs to run a listening test comparing the coded/decoded signal with the uncoded reference signal. For other technologies, e.g., upmix or binaural processing, however, the unprocessed signal only can act as a "comparison signal." Here, the goal is to achieve a significant preference of the processed over the comparison signal. For such preference listening tests, we underline the importance of clustering the test results to obtain additional valuable information, as opposed to using the standard statistic metrics like mean and confidence interval. This approach allows determining the size of the user group that significantly prefers to use the proposed algorithm when it would be available in a consumer device. As an example, listening test data for binaural processing algorithms are analyzed in this investigation.
Convention Paper 7728 (Purchase now)

P13-9 Perception of Head-Position-Dependent Variations in Interaural Cross-Correlation CoefficientRussell Mason, Chungeun Kim, Tim Brookes, University of Surrey - Guildford, Surrey, UK
Experiments were undertaken to elicit the perceived effects of head-position-dependent variations in the interaural cross-correlation coefficient of a range of signals. A graphical elicitation experiment showed that the variations in the IACC strongly affected the perceived width and depth of the reverberant environment, as well as the perceived width and distance of the source. A verbal experiment gave similar results and also indicated that the head-position-dependent IACC variations caused changes in the perceived spaciousness and envelopment of the stimuli.
Convention Paper 7729 (Purchase now)
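
For reference, the interaural cross-correlation coefficient discussed above is conventionally taken as the maximum of the normalized cross-correlation between the two ear signals over lags of roughly plus or minus 1 ms; a minimal sketch of that computation is shown below.

    import numpy as np

    def iacc(left_ear, right_ear, fs, max_lag_ms=1.0):
        # Maximum absolute normalized cross-correlation over interaural lags
        # of roughly +/-1 ms (the range of natural interaural time differences).
        max_lag = int(max_lag_ms * 1e-3 * fs)
        norm = np.sqrt(np.sum(left_ear ** 2) * np.sum(right_ear ** 2))
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                c = np.sum(left_ear[lag:] * right_ear[:len(right_ear) - lag])
            else:
                c = np.sum(left_ear[:lag] * right_ear[-lag:])
            best = max(best, abs(c) / norm)
        return best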


Friday, May 8, 15:00 — 16:00

Human Factors in Audio Systems



Friday, May 8, 15:00 — 16:30

W9 - Mixing Sports in 5.1–Part 1


Chair:
Gerhard Stoll, IRT Munich
Panelists:
Dennis Baxter, Sound for the Olympics - USA
Beat Joss, tpc AG - Switzerland
Ales Koman, Slovenia TV - Slovenia

Abstract:
Sports has proven to be a major driving force for the introduction of HDTV into the market. Events like the Football World Championships 2006, the Football European Championships in 2008, and the 2008 Beijing Olympics provide an excellent opportunity to showcase the strengths of high resolution pictures. Teaming up with the HD picture is High Definition Surround Sound and a generally more elaborate and creative sound design. The challenges are manifold:

• from capturing a roaring stadium crowd of 80,000 to hearing the details of every ball kick, the so-called “close ball”;
• from producing a surround mix to still serving your stereo-viewers with an appropriate downmix and an intelligible commentator;
• from getting your signal to and through your broadcast center unharmed to arriving at the consumer in sync with the picture;
• from making meaningful use of LFE to using the center channel not only for commentary but for the "sound of sports" as well.

A panel of experienced protagonists will discuss these issues and other challenges. Examples will give the audience the chance to judge the effectiveness themselves.

In Part 2 of this workshop a multitrack recording of one of the finals of the Swiss Ice-Hockey Championship 2009 will be mixed live for the audience to witness the approach to creating a compelling surround sound field, which includes the atmosphere of thousands of bawling, booing, and applauding fans and the “sounds of the match,” which you see on your screen.


Friday, May 8, 16:30 — 18:30

W10 - Audio Network Control Protocols


Chair:
Richard Foss, Rhodes University - Grahamstown, South Africa
Panelists:
John Grant, Nine Tiles
Robby Gurdan, UMAN
Rick Kreifeldt, Harman Professional
Philip Nye, Engineering Arts

Abstract:
With the advent of digital audio networking, there has been a need to manage the connection of devices on networks and also to control and access various parameters of the devices. A number of standard and proprietary protocols have been developed. In this workshop a panel of experts who have helped develop some of these protocols will discuss their approaches and attempt to define a way forward, whereby devices with differing protocol implementations can communicate.


Friday, May 8, 16:30 — 18:30

P14 - Multichannel Coding


Chair: Ville Pulkki

P14-1 Further EBU Tests of Multichannel Audio CodecsDavid Marston, BBC R&D - Tadworth, Surrey, UK; Franc Kozamernik, EBU - Geneva, Switzerland; Gerhard Stoll, Gerhard Spikofski, IRT - Munich, Germany
The European Broadcasting Union technical group D/MAE has been assessing the quality of multichannel audio codecs in a series of subjective tests. The two most recent tests and results are described in this paper. The first set of tests covered 5.1 multichannel audio emission codecs at a range of bit-rates from 128 kbit/s to 448 kbit/s. The second set of tests covered cascaded contribution codecs, followed by the most prominent emission codecs. Codecs under test include offerings from Dolby, DTS, MPEG, Apt, and Linear Acoustics. The conclusions observe that while high quality is achievable at lower bit-rates, there are still precautions to be aware of. The results from cascading of codecs have shown that the emission codec is usually the bottleneck of quality.
Convention Paper 7730 (Purchase now)

P14-2 Spatial Parameter Decision by Least Squared Error in Parametric Stereo Coding and MPEG SurroundChi-Min Liu, Han-Wen Hsu, Yung-Hsuan Kao, Wen-Chieh Lee, National Chiao Tung University - Hsinchu, Taiwan
Parametric stereo coding (PS) and MPEG Surround (MPS) are used to reconstruct stereo or multichannel signals from down-mixed signals with a few spatial parameters. For extracting spatial parameters, the first issue is to decide on a time-frequency (T-F) tiling, which controls the resolution of the reconstructed spatial scene and largely determines the number of bits consumed. On the other hand, according to the standard syntax, the up-mixing matrices for time slots not on time borders are reconstructed by interpolation in the decoder. Therefore, the second issue is to decide the transmitted parameter values on the time borders so as to minimize the reconstruction error of the matrices. For both PS and MPS, based on the criterion of least squared error, this paper proposes a generic dynamic programming method for deciding both issues under the trade-off between audio quality and limited bits.
Convention Paper 7731 (Purchase now)

P14-3 The Potential of High Performance Computing in Audio EngineeringDavid Moore, Jonathan Wakefield, University of Huddersfield - West Yorkshire, UK
High Performance Computing (HPC) resources are fast becoming more readily available. HPC hardware now exists for use in conjunction with standard desktop computers. This paper looks at what impact this could have on the audio engineering industry. Several potential applications of HPC within audio engineering research are discussed. A case study is also presented that highlights the benefits of using the Single Instruction, Multiple Data (SIMD) architecture when employing a search algorithm to produce surround sound decoders for the standard 5-speaker surround sound layout.
Convention Paper 7732 (Purchase now)

P14-4 Efficient Methods for High Quality Merging of Spatial Audio Streams in Directional Audio CodingGiovanni Del Galdo, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Ville Pulkki, Helsinki University of Technology - Espoo, Finland; Fabian Kuech, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Mikko-Ville Laitinen, Helsinki University of Technology - Espoo, Finland; Richard Schultz-Amling, Markus Kallinger, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Directional Audio Coding (DirAC) is an efficient technique to capture and reproduce spatial sound. The analysis step outputs a mono DirAC stream, comprising an omnidirectional microphone pressure signal and side information, i.e., the direction of arrival and diffuseness of the sound field expressed in the time-frequency domain. This paper proposes efficient methods to merge multiple mono DirAC streams to allow joint playback at the reproduction side. The problem of merging two or more streams arises in applications such as immersive spatial audio teleconferencing, virtual reality, and online gaming. Compared to a trivial direct merging of the decoder outputs, the proposed methods are more efficient as they do not require the synthesis step. A further benefit is that the loudspeaker setup at the reproduction side does not have to be known in advance. Simulations and listening tests confirm the absence of artifacts and that the proposed methods are practically indistinguishable from ideal merging.
Convention Paper 7733 (Purchase now)


Friday, May 8, 16:30 — 18:00

P15 - Hearing


P15-1 Psychoacoustics and Noise Perception Survey in Workers of the Construction SectorMarcos D. Fernández, Bálder Vitón, José Antonio Ballesteros, Samuel Quintana, Isabel González, Escuela Universitaria Politécnica de Cuenca, Universidad de Castilla-La Mancha - Cuenca, Spain
Noise levels alone are not enough to assess the influence of noise completely; therefore, psychoacoustic and perception surveys should be taken into account. The noise that construction workers produce in their tasks is recorded with a head and torso simulator (HATS). Those recordings are then processed to derive different parameters: the spectrum, weighted equivalent levels, and the main psychoacoustic parameters. In addition, a specific survey has been developed to assess the perception of these activity noises during working time and to correlate perception adjectives with the parameters mentioned. The survey is designed to be answered by the workers exposed to the noise, so that conclusions can be drawn about the feelings and annoyance that the noise can cause.
Convention Paper 7734 (Purchase now)

P15-2 On the Design of Automatic Sound Classification Systems for Digital Hearing AidsEnrique Alexandre, Lorena Álvarez-Perez, Roberto Gil-Pita, Raúl Vicen-Bueno, Lucas Cuadra, University of Alcalá - Alcalá de Henares, Spain
The design of digital hearing aids able to carry out advanced functionalities (such as, for instance, classifying the acoustic environment and automatically selecting the best amplification program for the user’s comfort) presents great difficulty. Since hearing aids have to work at very low clock frequencies in order to minimize power consumption and maximize battery life, the number of available instructions per second is actually very small. This enforces the design of efficient algorithms with a reduced number of instructions. In particular, this paper focuses on three closely related topics: (1) the design of low-complexity features; (2) the use of automatic feature selection algorithms to optimize the performance of the classifier; and (3) a critical analysis of a variety of different classification algorithms, basically based on their complexity and performance, determining whether or not they are feasible to implement.
Convention Paper 7735 (Purchase now)

P15-3 Pruning Algorithms for Multilayer Perceptrons Tailored for Speech/Non-Speech Classification in Digital Hearing AidsLorena Álvarez, Enrique Alexandre, Manuel Rosa-Zurera, University of Alcalá - Alcalá de Henares, Spain
This paper explores the feasibility of using different pruning algorithms for multilayer perceptrons (MLPs) applied to the problem of speech/non-speech classification in digital hearing aids. A classifier based on MLPs is considered the best option in spite of its presumably high computational cost. Nevertheless, its implementation has been proven to be feasible: it requires some trade-offs involving a balance between reducing the computational demands (that is, the number of neurons) and the quality perceived by the user. In this respect, this paper focuses on the design of three novel pruning algorithms for MLPs, which attempt to converge to the minimum-complexity network (that is, the lowest number of neurons in the hidden layer) without degrading its performance. The results obtained with the proposed algorithms are compared with those obtained when using another pruning algorithm proposed in the literature.
Convention Paper 7736 (Purchase now)

P15-4 Evolutionary Optimization for Hearing Aids of Computational Auditory Scene AnalysisAnton Schlesinger, Marinus M. Boone, Technical University of Delft - Delft, The Netherlands
Computational auditory scene analysis (CASA) provides an excellent means to improve speech intelligibility in adverse acoustical situations. In order to utilize algorithms of CASA in hearing aids, sets of algorithmic parameters need to be adjusted to the individual auditory performance of the listener and the acoustic scene in which they are employed. Performed manually, the optimization is an expensive procedure. We therefore developed a framework in which algorithms of CASA are automatically optimized by the principles of evolution, i.e., by a genetic algorithm. By using the speech transmission index (STI) as an objective function, the presented framework presents a holistic routine that is solely based on psychoacoustical and physiological models to improve and to assess speech intelligibility. The initial listening test revealed a discrepancy between the objective and subjective assessment of speech intelligibility, which suggests a review of the objective function. Once the objective function is in accordance with the individual perception of speech intelligibility, the presented framework could be applied in the optimization of all complex speech processors and therewith accelerate their assessment and application.
Convention Paper 7737 (Purchase now)

P15-5 Enhanced Control of On-Screen Faders with a Computer MouseMichael Hlatky, Kristian Gohlke, David Black, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Input devices of the audio studio that formerly were physical have mostly been converted into virtual controls on the computer screen. Whereas this transition saves space and cost, it has reduced the performance of these controls, as virtual controls adjusted using the computer mouse do not exhibit the accuracy and accessibility of their physical counterparts. Previous studies show that interaction with scrollable timelines can be enhanced by an intelligent interpretation of the mouse movement. We apply similar techniques to virtual faders as used for audio control, leveraging such approaches as controllable zoom levels and pseudo-haptic interaction. Tests conducted on five such methods provide insight into how to decouple the fader from the mouse movement to improve accuracy without impairing the speed of the interaction.
Convention Paper 7738 (Purchase now)

P15-6 Modeling of External Ear Acoustics for Insert Headphone UsageMarko Hiipakka, Miikka Tikander, Matti Karjalainen, Helsinki University of Technology - Espoo, Finland
Although the acoustics of the external ear have been studied extensively for auralization and hearing aids, the acoustic behavior with insert headphones is not as well known. Our research focused on the effects of the outer ear's physical dimensions, particularly on the sound pressure at the eardrum. The main parameter was the length of the canal, but the eardrum's damping of the resonances was also studied. Ear canal simulators and a dummy head were constructed. Measurements were also performed on human ear canals. The study was carried out both with unblocked ear canals and with the canal entrance blocked by an insert earphone. Special insert earphones with in-ear microphones were constructed for this purpose. Physics-based computational models were finally used to validate the approach.
Convention Paper 7739 (Purchase now)
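
As a rough orientation for the quantities at play, the open ear canal is often approximated as a tube closed at the eardrum and open at the entrance, whose first resonance lies near c/(4L); the sketch below evaluates this rule of thumb for a typical canal length (an assumed 25 mm). Blocking the entrance with an insert earphone changes this boundary condition and shifts the resonances, which is the kind of effect the measurements in the paper quantify.

    def quarter_wave_resonance_hz(canal_length_m=0.025, c=343.0):
        # First resonance of a tube open at one end and closed at the other.
        return c / (4.0 * canal_length_m)

    # quarter_wave_resonance_hz(0.025) -> about 3430 Hz, i.e., the familiar
    # ear canal resonance in the 3-4 kHz region.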


Friday, May 8, 17:00 — 18:30

W11 - Mixing Sports in 5.1–Part 2


Chair:
Gerhard Stoll, IRT Munich
Panelists:
Dennis Baxter, Sound for the Olympics - USA
Beat Joss, tpc AG - Switzerland
Ales Koman, Slovenia TV - Slovenia

Abstract:
Sports has proven to be a major driving force for the introduction of HDTV into the market. Events like the Football World Championships 2006, the Football European Championships in 2008, and the 2008 Beijing Olympics provide an excellent opportunity to showcase the strengths of high resolution pictures. Teaming up with the HD picture is High Definition Surround Sound and a generally more elaborate and creative sound design. The challenges are manifold:

• from capturing a roaring stadium crowd of 80,000 to hearing the details of every ball kick, the so-called “close ball”;
• from producing a surround mix to still serving your stereo-viewers with an appropriate downmix and an intelligible commentator;
• from getting your signal to and through your broadcast center unharmed to arriving at the consumer in sync with the picture;
• from making meaningful use of LFE to using the center channel not only for commentary but for the "sound of sports" as well.

A panel of experienced protagonists will discuss these issues and other challenges. Examples will give the audience the chance to judge the effectiveness themselves.

In Part 2 of this workshop a multitrack recording of one of the finals of the Swiss Ice-Hockey Championship 2009 will be mixed live for the audience to witness the approach to creating a compelling surround sound field, which includes the atmosphere of thousands of bawling, booing, and applauding fans and the “sounds of the match,” which you see on your screen.


Friday, May 8, 17:30 — 18:30

T9 - Techniques of Audio Localization for Games


Presenters:
Fabio Minazzi, Binari Sonori Srl
Francesco Zambon, Binari Sonori Srl

Abstract:
Videogaming is a new form of communication that relies heavily on audio. The production of game soundtracks shares many technologies and techniques with the production of audio for other media. At the same time, videogames are software products with nonlinear and dynamic behaviors, and these features of the medium affect the way audio is produced and localized. This presentation reviews a set of specific audio production techniques applied in game localization, such as the pre- and postproduction methods for localizing voices, A/V asset management, the teams that attend the local recording sessions, as well as the international quality assurance processes. Some theoretical aspects are also described, such as the typical constraints that need to be accounted for during the speech recording phase in order to allow the software to seamlessly load, mix, and combine audio assets coming from various countries and recorded at different times.


Friday, May 8, 19:30 — 21:30

Banquet


Abstract:
Isar Bräu, München-Pullach

This year the Banquet will take place in a small old railway station, above the valley of the River Isar. The railway opened in 1891 and steam trains took people from the city to many beautiful places in the south of Munich. Today the steam trains have been replaced and the line is now part of the S-Bahn, so the old station is not needed anymore and has been turned into a traditional Bavarian style restaurant with its own micro-brewery. What could be more natural than making this location a pleasant place for a “get together” in a lovely atmosphere?

The welcome beer from the micro brewery and other drinks will be followed by a fine buffet with Bavarian delicacies. At the end of a long day at the Convention, these “Schmankerl” will be a good way to relax and enjoy the evening with old and new friends and colleagues. Come and savour Munich’s lifestyle. The ticket price includes all food and drinks and the bus to the restaurant and back.

55 Euros for AES members; 65 Euros for nonmembers
Tickets will be available at the Special Events desk.


Saturday, May 9, 09:00 — 10:00

Audio Recording and Mastering Systems



Saturday, May 9, 09:00 — 11:00

W12 - Reality Is Not a Recording/A Recording Is Not Reality


Chair:
Jim Anderson, New York University - New York, NY, USA

Abstract:
The former New York Times film critic, Vincent Canby, wrote "all of us have different thresholds at which we suspend disbelief, and then gladly follow fictions to conclusions that we find logical." Any recording is a "fiction," a falsity, even in its most pure form. It is the responsibility, if not the duty, of the recording engineer and producer to create a universe so compelling and transparent that the listener isn't aware of any manipulation. Using basic recording techniques and standard manipulation of audio, a recording is made, giving the listener an experience that is not merely logical but better than reality. How does this occur? What techniques can be applied? How does an engineer create a convincing loudspeaker illusion that a listener will perceive as a plausible reality? Recordings will be played.


Saturday, May 9, 09:00 — 11:30

P16 - Spatial Rendering–Part 1


Chair: Andreas Silzle

P16-1 An Alternative Ambisonics Formulation: Modal Source Strength Matching and the Effect of Spatial AliasingFranz Zotter, Hannes Pomberger, University of Music and Dramatic Arts - Graz, Austria; Matthias Frank, Graz University of Technology - Graz, Austria
Ambisonics synthesizes sound fields as a sum over angular (spherical/cylindrical harmonic) modes, resulting in the definition of an isotropically smooth angular resolution. This means, virtual sources are synthesized with outstanding smoothness across all angles of incidence, using discrete loudspeakers that uniformly cover a spherical or circular surface around the listening area. The classical Ambisonics approach models the fields of these discrete loudspeakers in terms of a sampled continuum of plane-waves. More accurately, the contemporary concept of Ambisonics uses a continuous angular distribution of point-sources at finite distance instead, which is considerably easier to imagine. This also improves the accuracy of holophonic sound field synthesis and the analytic description of the sweet spot. The sweet spot is a limited area of faultless synthesis emerging from angular harmonics truncation. Additionally, playback with loudspeakers causes spatial aliasing. In this sense, it allows for a successive consideration of the major shortcomings of Ambisonics: the limited sweet spot size and spatial aliasing. To elaborate on this concept this paper starts with the solution of the nonhomogeneous wave equation for a spherical point-source distribution, and ends with a novel study on spatial aliasing in Ambisonics.
Convention Paper 7740 (Purchase now)
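As background to the modal formulation in P16-1, the interior sound field in a source-free region can be written as a spherical-harmonic series, and Ambisonics reproduces a truncation of that series up to some order N. The relation below is the standard textbook expansion, not the specific derivation of the paper:

$$
p(r,\theta,\varphi,\omega) \;\approx\; \sum_{n=0}^{N}\sum_{m=-n}^{n} b_{nm}(\omega)\, j_n(kr)\, Y_n^m(\theta,\varphi), \qquad k=\frac{\omega}{c},
$$

where $j_n$ are spherical Bessel functions, $Y_n^m$ spherical harmonics, and the truncation order $N$ sets the size of the sweet spot (roughly $kr \lesssim N$).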

P16-2 Sound Field Reproduction Employing Non-Omnidirectional LoudspeakersJens Ahrens, Sascha Spors, Deutsche Telekom Laboratories, Technische Universität Berlin - Berlin, Germany
In this paper we treat sound field reproduction via circular distributions of loudspeakers. The general formulation of the approach has been recently published by the authors. In this paper we concentrate on the employment of secondary sources (i.e., loudspeakers) whose spatio-temporal transfer function is not omnidirectional. The presented approach allows us to treat each spatial mode of the secondary source’s spatio-temporal transfer function individually. We finally outline the general process of incorporating spatio-temporal transfer functions obtained from microphone array measurements.
Convention Paper 7741 (Purchase now)

P16-3 Alterations of the Temporal Spectrum in High-Resolution Sound Field Reproduction of Different Spatial BandwidthsJens Ahrens, Sascha Spors, Deutsche Telekom Laboratories, Technische Universität Berlin - Berlin, Germany
We present simulations of the wave field reproduced by a discrete circular distribution of loudspeakers. The loudspeaker distribution is driven either with signals of infinite spatial bandwidth (as it happens in wave field synthesis), or the loudspeaker distribution is driven with signals of finite spatial bandwidth (as it is the case in near-field compensated higher order Ambisonics). The different spatial bandwidths lead to different accuracies of the desired component of the reproduced wave field and to spatial aliasing artifacts with essentially different properties. Our investigation focuses on the potential consequences of the artifacts on human perception.
Convention Paper 7742 (Purchase now)

P16-4 Cooperative Spatial Audio Authoring: Systems Approach and Analysis of Use CasesJens-Oliver Fischer, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Francis Gropengiesser, TU Ilmenau - Ilmenau, Germany; Sandra Brix, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany
Today’s audio production process is highly parallel and segregated. This is especially the case in the field of audio postproduction for motion pictures. The introduction of spatial audio systems like 5.1, 22.2 or Wave Field Synthesis results in even more production steps, namely the spatial authoring, to accomplish a rich experience for the audience. This paper proposes a system that enables the audio engineers to work together on the same project. The proposed system is planned to be implemented for an existing spatial authoring software but can be utilized by any other application that organizes its data in a tree structured way. Three major use cases, i.e., Single User, Work Space, and Work Group, are introduced and analyzed.
Convention Paper 7743 (Purchase now)

P16-5 Spatial Sampling Artifacts of Wave Field Synthesis for the Reproduction of Virtual Point SourcesSascha Spors, Jens Ahrens, Deutsche Telekom Laboratories, Technische Universität Berlin - Berlin, Germany
Spatial sound reproduction systems with a large number of loudspeakers are increasingly being used. Wave field synthesis is a reproduction technique using a large number of densely placed loudspeakers (loudspeaker array). The underlying theory, however, assumes a continuous distribution of loudspeakers. Individual loudspeakers placed at discrete positions constitute a spatial sampling process that may lead to sampling artifacts. These may degrade the perceived reproduction quality and will limit the application of active control techniques like active room compensation. The sampling artifacts for the reproduction of plane waves have already been discussed in previous papers. This paper derives the spatial sampling artifacts and anti-aliasing conditions for the reproduction of virtual point sources on linear loudspeaker arrays using wave field synthesis techniques.
Convention Paper 7744 (Purchase now)
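For orientation, a commonly quoted worst-case rule of thumb for a linear array with loudspeaker spacing $\Delta x$ (a simplification of the kind of anti-aliasing condition P16-5 refines for virtual point sources, not the paper's own result) is

$$
f_{\mathrm{al}} \;\le\; \frac{c}{2\,\Delta x},
$$

so that, for example, a spacing of 0.17 m limits artifact-free reproduction to roughly 1 kHz with $c \approx 343$ m/s.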


Saturday, May 9, 09:30 — 13:30

TT7 - Bavarian Broadcast System BR Radio


Abstract:
State of the Art Broadcasting Studio in Germany

Bayerischer Rundfunk [Bavarian Broadcasting] (BR) is the public broadcasting authority for the German Freistaat (Free State) of Bavaria, with its main offices located in Munich. On3, the young brand of Bavarian Radio, presents a completely new radio and internet world for the youth of Bavaria. The content comes from the most modern broadcasting studio in Germany. The new website and the digital radio station “on3-radio,” the live broadcast “on3-südwild” on Bavarian television, and the music program “on3-start ramp” are produced and broadcast from a specially designed multimedia studio environment. The heart of the on3 studios is an entertainment area suited for live acts as well as for radio and television recordings. It addresses young people who attach importance to sound journalism and to music offerings outside the mainstream. They are invited to inform themselves and to participate on this innovative multimedia platform for listening to radio and downloading audio and video. For the first time, listeners may arrange their own personal radio program, in public-service quality, according to their own wishes (www.on3-radio.de). Furthermore, you will see the big recording studio of the BR Symphony Orchestra with its modern control room.


Price: EUR 20

Saturday, May 9, 09:30 — 13:30

TT8 - Bavarian Broadcast System BR Television


Abstract:
On this tour you will see the postproduction facility, including various transfer rooms, sound design suites, and dubbing stages, where every kind of production for BR-TV takes place. We will visit a large TV studio, used for live programs, whose control rooms were constructed in 2006 and configured from the outset for 5.1 production without additional effort. Equipped with a StageTec AURUS mixing console, it serves both as a music studio and as a distribution center during big events with many venues. During the UEFA European (Football) Championship, separate signals from the stadiums in Austria and Switzerland were mixed there as a 5.1 transmission for the German broadcaster ARD.


Price: EUR 20

Saturday, May 9, 10:00 — 11:00

High Resolution Audio



Saturday, May 9, 11:30 — 13:00

W13 - AES 42 Digital Microphones


Chair:
Gregor Zielinsky, Sennheiser
Panelists:
Malgorzata Albinska-Frank
Stephan Flock, Direct Out
Stephan Peus, Neumann
Helmut Wittek, Schoeps

Abstract:
The panel will discuss daily work with digital microphones and their peripheral devices.

Although the first digital microphones were launched a few years ago, their use is still confined to a relatively exclusive community of users.

During the last two years, prices of digital microphones have dropped, while the choice of mics, as well as of interfaces, has grown very strongly. More and more companies are incorporating the AES 42 standard.

In this panel manufacturers and users discuss possibilities and workflows of digital microphones. Experienced users will give their view on the AES 42 standard.


Saturday, May 9, 12:00 — 13:00

Studio Practices and Production



Saturday, May 9, 13:00 — 14:00

Audio Forensics



Saturday, May 9, 13:30 — 17:30

TT9 - MSM Production Studios


Abstract:
msm-studios can look back on a long and successful history as a premium service provider for mastering, postproduction, and media creation. We consider ourselves co-creators of your media output, in both a technical and an artistic sense. Especially in the final stages prior to industrial replication, experience, know-how, and continuous technical development are critical factors for productions of the highest level. This expertise is reflected in the relationships of trust built up with our customers over many years.


Price: EUR 20

Saturday, May 9, 13:30 — 17:30

TT10 - Staatsoper München


Abstract:
Munich’s “first” opera house, the Nationaltheater, shows its audio equipment. One studio for live-sound, one for production and broadcast, and one for recording are integrated in a digital network. Additionally, the back-stage installations will be shown.


Price: EUR 20

Saturday, May 9, 13:30 — 15:30

W14 - 5.1 High Profile Mixing


Co-chairs:
Akira Fukada, Senior Engineer, NHK - Tokyo, Japan
Ulrike Schwarz, Engineer, Bavarian Radio - Munich, Germany
Panelists:
Jean-Marie Geijsen, Director & Balance Engineer, Polyhymnia - Baarn, The Netherlands
Sascha Paeth, Owner/Engineer, Gate Studios - Wolfsburg, Germany
Ronald Prent, Resident Surround Engineer, Galaxy Studios - Mol, Belgium

Abstract:
It is very interesting how the perception of music can be altered by a mixing engineer. A conductor or a musician shapes the written music, the composer's work, through his or her interpretation and expression. For the recording of music, what the music conveys to the engineer is just as important. In the process of recording and mixing, the engineer will approach and embrace the music like an artist or musician.

In this workshop engineers who have various cultural and musical backgrounds present their different mixing results. What did each engineer consider and what did they aim at? We believe that considering the result is a very important element for understanding music and art.


Saturday, May 9, 14:00 — 15:00

Perception and Subjective Evaluation of Audio Signals



Saturday, May 9, 14:00 — 15:00

W15 - War on Music


Presenter:
Thomas Lund, tc electronic

Abstract:
Music is currently under fire from two sides: Hyper-level and data reduction.

Using theory as well as listening examples, the session will demonstrate how this is a lethal combination. 16-bit audio recorded 20 years ago, using less perfect conversion and processing equipment than today, easily ends up sounding better than modern recordings.

Based on years of research, some of the significant "quality bottlenecks" of today's production and delivery system are identified, and ways of improving on the situation are suggested.


Saturday, May 9, 15:00 — 16:00

Electromagnetic Compatibility



Saturday, May 9, 15:00 — 18:30

P21 - Spatial Rendering–Part 2


Chair: Sascha Spors, Technical University of Berlin - Berlin, Germany

P21-1 Score File Generators for Boids-Based Granular Synthesis in CsoundEnda Bates, Dermot Furlong, Trinity College - Dublin, Ireland
In this paper we present a set of score file generators and granular synthesis instruments for the Csound language. The applications use spatial data generated by the Boids flocking algorithm along with various user-defined values to generate score files for grainlet additive synthesis, granulation, and glisson synthesis instruments. Spatialization is accomplished using Higher Order Ambisonics and distance effects are modeled using the Doppler Effect, early reflections, and global reverberation. The sonic quality of each synthesis method is assessed and an original composition by the author is presented.
Convention Paper 7761 (Purchase now)

P21-2 Acoustical Rendering of an Interior Space Using the Holographically Designed Sound ArrayWan-Ho Cho, Jeong-Guon Ih, KAIST - Daejeon, Korea
It was reported that the filter for the acoustic array can be inversely designed in a holographic way, which was demonstrated in a free-field. In this study the same method using the boundary element method (BEM) was employed to render the interior sound field in an acoustically desired fashion. Because the inverse BEM technique can deal with arbitrary shaped source or bounding surfaces, one can simultaneously consider the effect of irregular radiation surface and reflection boundaries having impedances such as walls, floor, and ceiling. To examine the applicability, a field rendering example was tested to control the relative spatial distribution of sound pressure in the enclosed field.
Convention Paper 7762 (Purchase now)

P21-3 Validation of a Loudspeaker-Based Room Auralization System Using Speech Intelligibility MeasuresSylvain Favrot, Jörg M. Buchholz, Technical University of Denmark - Lyngby, Denmark
A novel loudspeaker-based room auralization (LoRA) system has been proposed to generate versatile and realistic virtual auditory environments (VAEs) for investigating human auditory perception. This system efficiently combines modern room acoustic models with loudspeaker auralization using either single-loudspeaker or high-order Ambisonics (HOA) auralization. The LoRA signal processing of the direct sound and the early reflections was investigated by measuring the speech intelligibility enhancement by early reflections in diffuse background noise. Danish sentences were simulated in a classroom, and the direct sound and each early reflection were auralized with either a single loudspeaker, HOA, or first-order Ambisonics. Results indicated that (i) absolute intelligibility scores depend significantly on the reproduction technique and that (ii) early reflections reproduced with HOA provide a similar intelligibility benefit as when reproduced with a single loudspeaker. It is concluded that speech intelligibility experiments can be carried out with the LoRA system using either the single-loudspeaker or the HOA technique.
Convention Paper 7763 (Purchase now)

P21-4 Low Complexity Directional Sound Sources for Finite Difference Time Domain Room Acoustic ModelsAlexander Southern, Damian Murphy, University of York - York, UK
The demand for more natural and realistic auralization has resulted in a number of approaches to the time domain implementation of directional sound sources in wave-based acoustic modeling schemes such as the Finite Difference Time Domain (FDTD) method and the Digital Waveguide Mesh (DWM). This paper discusses an approach for implementing simple regular directive sound sources using multiple monopole excitations with distributed spatial positioning. These arrangements are tested along with a discussion of the characteristic limitations for each setup scenario.
Convention Paper 7764 (Purchase now)

P21-5 Binaural Reverberation Using a Modified Jot Reverberator with Frequency-Dependent and Interaural Coherence MatchingFritz Menzer, Christof Faller, Ecole Polytechnique Fédérale de Lausanne - Lausanne, Switzerland
An extension of the Jot reverberator is presented, producing binaural late reverberation where the interaural coherence can be controlled as a function of frequency such that it matches the frequency-dependent interaural coherence of a reference binaural room impulse response (BRIR). The control of the interaural coherence is implemented using linear filters outside the reverberator’s recursive loop. In the absence of a reference BRIR, these filters can be calculated from an HRTF set.
Convention Paper 7765 (Purchase now)
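A minimal sketch, in the spirit of P21-5 but not the authors' Jot-based implementation: a prescribed frequency-dependent interaural coherence is imposed on two mutually incoherent late-reverberation channels by per-bin mixing with real gains. The STFT framing and the example target curve below are assumptions made purely for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def match_coherence(s1, s2, target_coh, fs=48000, nfft=1024):
    """Mix two mutually incoherent reverb channels so that the output pair
    has a prescribed frequency-dependent coherence.

    For unit-variance, uncorrelated inputs mixed as
        L = cos(a)*s1 + sin(a)*s2,  R = sin(a)*s1 + cos(a)*s2,
    the normalized cross-correlation of L and R equals sin(2a),
    so a = 0.5*arcsin(coherence) per frequency bin.
    """
    f, _, S1 = stft(s1, fs, nperseg=nfft)
    _, _, S2 = stft(s2, fs, nperseg=nfft)
    coh = np.clip(target_coh(f), -1.0, 1.0)     # target coherence per bin
    a = 0.5 * np.arcsin(coh)[:, None]           # mixing angle per bin
    L = np.cos(a) * S1 + np.sin(a) * S2
    R = np.sin(a) * S1 + np.cos(a) * S2
    _, left = istft(L, fs, nperseg=nfft)
    _, right = istft(R, fs, nperseg=nfft)
    return left, right

# Hypothetical target: high coherence at low frequencies, low above ~1 kHz.
example_target = lambda f: np.exp(-f / 1000.0)
```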

P21-6 Design and Limitations of Non-Coincidence Correction Filters for Soundfield MicrophonesChristof Faller, Illusonic LLC - Lausanne, Switzerland; Mihailo Kolundzija, Ecole Polytechnique Fédérale de Lausanne - Lausanne, Switzerland
The tetrahedral microphone capsule arrangement in a soundfield microphone captures a so-called A-format signal, which is then converted to a corresponding B-format signal. The phase differences between the A-format signal channels due to non-coincidence of the microphone capsules cause coloration and errors in the corresponding B-format signals and linear combinations thereof. Various strategies for designing B-format non-coincidence correction filters are compared and limitations are discussed.
Convention Paper 7766 (Purchase now)
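For context, the idealized conversion from the four tetrahedral A-format capsule signals to B-format is a fixed sum/difference matrix. The sketch below assumes the usual capsule labeling (LFU = left-front-up, RFD = right-front-down, LBD = left-back-down, RBU = right-back-up) and deliberately omits exactly the non-coincidence correction filters that the paper analyzes.

```python
import numpy as np

def a_to_b_format(lfu, rfd, lbd, rbu):
    """Convert tetrahedral A-format capsule signals to B-format (W, X, Y, Z).

    Idealized coincident conversion; real soundfield microphones additionally
    apply non-coincidence correction filters to the result.
    """
    w = lfu + rfd + lbd + rbu   # omnidirectional component
    x = lfu + rfd - lbd - rbu   # front-back figure-of-eight
    y = lfu - rfd + lbd - rbu   # left-right figure-of-eight
    z = lfu - rfd - lbd + rbu   # up-down figure-of-eight
    return np.stack([w, x, y, z])
```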

P21-7 Generalized Multiple Sweep MeasurementStefan Weinzierl, Andre Giese, Alexander Lindau, TU Berlin - Berlin, Germany
A system identification by impulse response measurements with multiple sound source configurations can benefit greatly from time-efficient measurement procedures. An optimized method based on interleaving and overlapping multiple exponential sweeps (MESM) used as excitation signals was presented by Majdak et al. (2007). For single system identifications, however, much higher signal-to-noise ratios (SNR) can be reached with sweeps whose magnitude spectra are adapted to the background noise spectrum of the acoustical environment, as proposed by Müller & Massarani (2001). We investigated under which conditions and to what extent the efficiency of multiple sweep measurements can be increased by using arbitrary, spectrally adapted sweeps. An extension of the MESM approach toward generalized sweep spectra is presented, along with a recommended measurement procedure and a prediction of the efficiency of multiple sweep measurements depending on typical measurement conditions.
Convention Paper 7767 (Purchase now)
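As a reference point for the sweep techniques compared in P21-7, a single exponential (logarithmic) sine sweep can be generated as below. The interleaving and overlapping of multiple sweeps, and the spectral adaptation to the background noise, are the subject of the paper and are not shown here; the frequency range and duration are illustrative defaults.

```python
import numpy as np

def exp_sweep(f1=20.0, f2=20000.0, T=10.0, fs=48000):
    """Exponential sine sweep from f1 to f2 Hz over T seconds.

    x(t) = sin( 2*pi*f1*L * (exp(t/L) - 1) ),  with  L = T / ln(f2/f1)
    """
    t = np.arange(int(T * fs)) / fs
    L = T / np.log(f2 / f1)
    return np.sin(2.0 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
```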


Saturday, May 9, 15:00 — 18:30

P22 - Microphones and Headphones


Chair: William Evans, University of Surrey - Guildford, Surrey, UK

P22-1 Frequency Response Adaptation in Binaural HearingDavid Griesinger, Consultant - Cambridge, MA, USA
The pinna and ear canals act as listening trumpets to concentrate sound pressure on the eardrum. This concentration is strongly frequency dependent, typically showing a rise in pressure of 20 dB at 3000 Hz. In addition, diffraction and reflections from the pinna substantially alter the frequency response of the eardrum pressure as a function of the direction of a sound source. In spite of these large departures from flat response, listeners usually report that a uniform pink power spectrum sounds frequency balanced, and loudspeakers are manufactured to this standard. But on close listening frontal pink noise does not sound uniform. The ear clearly uses adaptive correction of timbre to achieve these results. This paper discusses and demonstrates the properties and limits of this adaptation. The results are important for our experience of live music in halls and in reproduction of music through loudspeakers and headphones.
Convention Paper 7768 (Purchase now)

P22-2 Concha Headphones and Their Coupling to the EarLola Blanchard, Bang & Olufsen ICEpower s/a - Lyngby, Denmark; Finn T. Agerkvist, Technical University of Denmark - Lyngby, Denmark
The purpose of the study is to obtain a better understanding of concha headphones. Concha headphones are small earpieces that are placed in the concha. They are not sealed to the ear, and therefore there is a leak between the earpiece and the ear. This leak is the reason for the significant lack of bass when using such headphones. This paper investigates the coupling between the headphone and the ear by means of measurements in artificial ears and models. The influence of the back volume is taken into account.
Convention Paper 7769 (Purchase now)

P22-3 Subjective Evaluation of Headphone Target Frequency ResponsesGaëtan Lorho, Nokia Corporation - Finland
The effect of headphone frequency response equalization on listeners’ preference was studied for music and speech reproduction. The high-quality circum-aural headphones selected for this listening experiment were first equalized to produce a flat frequency response. Then, a set of filters was created based on two parameters defining the amplitude and center frequency of the main peak found around 3 kHz in the free-field and diffuse-field equalization curves. Two different listening tests were carried out to evaluate these equalization candidates using a different methodology and a total of 80 listeners. The results of this study indicate that a target frequency response with a 3 kHz peak of lower amplitude than in the diffuse-field response is preferred by listeners for both music and speech.
Convention Paper 7770 (Purchase now)
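A hedged sketch of how one family of target-response candidates like those in P22-3 could be realized: a standard second-order peaking equalizer (RBJ audio-EQ-cookbook form) centered near 3 kHz. The center frequency, gain, and Q below are illustrative placeholders, not the values used in the study.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs=48000, f0=3000.0, gain_db=6.0, q=1.0):
    """Biquad peaking EQ coefficients (RBJ audio EQ cookbook)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Example: apply a candidate 3 kHz peak to noise as a stand-in headphone signal.
x = np.random.randn(48000)
b, a = peaking_eq(gain_db=4.0)
y = lfilter(b, a, x)
```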

P22-4 Study and Consideration on Symmetrical KEMAR HATS Conforming to IEC60959Kiyofumi Inanaga, Homare Kon, Sony Corporation - Tokyo, Japan; Gunnar Rasmussen, Per Rasmussen, G.R.A.S. Sound & Vibration A/S - Holte, Denmark; Yasuhiro Riko, Riko Associates - Yokohama, Japan
KEMAR is widely recognized as a leading model of head and torso simulators (HATS) for different types of acoustic measurements meeting the requirements of the global industrial standards ANSI S3.36/ASA58-1985 and IEC 60959:1990. One of the KEMAR HATS pinna models has a reputation for good reproducibility of measured results when examining headphones and earphones. However, it requires free-field compensation in order to conduct the measurements; thus, the head-related transfer function (HRTF) of a HATS fitted with this pinna model must be corrected. Because headphones and earphones are usually designed symmetrically, we developed a prototype of a Symmetrical KEMAR HATS based on the original KEMAR fitted with the pinna model with good reproducibility. We measured and evaluated a set of HRTFs from the sound source to both ears. Our study concluded that the HATS we developed has symmetrical characteristics, is suitable as a tool for measuring the quality of a variety of acoustic devices alongside the conventional KEMAR, and can serve as a new common platform for different types of electroacoustic measurements.
Convention Paper 7771 (Purchase now)

P22-5 Spatio-Temporal Gradient Analysis of Differential Microphone ArraysMihailo Kolundzija, Christof Faller, Ecole Polytechnique Fédérale de Lausanne - Lausanne, Switzerland; Martin Vetterli, Ecole Polytechnique Fédérale de Lausanne - Lausanne, Switzerland, University of California at Berkeley, Berkeley, CA, USA
The literature on gradient and differential microphone arrays makes a distinction between the two, yet shows how both types can be used to obtain the same response. A more theoretically sound rationale for using delays in differential microphone arrays has not yet been given. This paper presents a gradient analysis of the sound field viewed as a spatio-temporal phenomenon and gives a theoretical interpretation of the working principles of gradient and differential microphone arrays. It shows that both types of microphone arrays can be viewed as devices for approximately measuring the spatio-temporal derivatives of the sound field. Furthermore, it motivates the design of high-order differential microphone arrays using the aforementioned spatio-temporal gradient analysis.
Convention Paper 7772 (Purchase now)
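For reference, the familiar frequency/angle response of a first-order delay-and-subtract array, which the spatio-temporal gradient view in P22-5 reinterprets, follows from two omni capsules spaced by d with an internal delay τ applied to the rear capsule:

$$
H(\omega,\theta) \;=\; 1 - e^{-j\omega\left(\tau + \frac{d}{c}\cos\theta\right)}
\;\approx\; j\omega\left(\tau + \frac{d}{c}\cos\theta\right)
\quad\text{for } \omega\,(\tau + d/c) \ll 1,
$$

yielding a figure-of-eight pattern for τ = 0 and a cardioid for τ = d/c.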

P22-6 The Analog Microphone Interface and its HistoryJoerg Wuttke, Joerg Wuttke Consultancy - Pfinztal, Germany, Schoeps GmbH, Karlsruhe, Germany
The interface between microphones and microphone inputs has special characteristics and requires special attention. The low output levels of microphones and the possible need for long cables have made it necessary to think about noise and interference of all kinds. A microphone input is also the electrical load for a microphone and can have an adverse influence on its performance. Condenser microphones contain active circuitry that requires some form of powering. With the introduction of transistorized circuitry in the 1960s, it became practical for this powering to be incorporated into microphone inputs. Various methods appeared in the beginning; 48-Volt phantom powering is now the dominant standard, but this standard method is still not always implemented correctly.
Convention Paper 7773 (Purchase now)

P22-7 Handling Noise Analysis in Large Cavity Microphone Windshields—Improved SolutionPhilippe Chenevez, CINELA - Paris, France
Pressure gradient microphones are well known to be highly sensitive to vibrations. Well-designed suspensions provide the best isolation possible, but when the microphone is placed inside a large-cavity windshield, the external skin behaves as a drum excited by the vibrations of the support (boom or stand). As a consequence, structure-borne noise is also transmitted acoustically to the microphone because of its close proximity. Some theoretical aspects and practical measurements are presented, together with a proposed improved solution.
Convention Paper 7774 (Purchase now)


Saturday, May 9, 15:30 — 17:00

Career/Job Fair


Abstract:
The Career Fair will feature several companies from the exhibit floor. All attendees of the convention, students and professionals alike, are welcome to come talk with representatives from the companies and find out more about job and internship opportunities in the audio industry. Bring your resume!


Saturday, May 9, 16:00 — 18:30

W17 - Sound Design for Die Fälscher


Presenters:
Tobi Fleig, Dubbing Mixer
Rainer Heesch, Sound Designer
Tatjana Jakob, Sound Designer
Olaf Mierau, Postproduction Sound Supervisor

Abstract:
Die Fälscher (The Counterfeiters) was awarded the Oscar in 2008 for the best non-English-speaking film. Key members of the sound team will discuss the concepts of sound designing this movie, communication with the director, freedom and constraints, sound as a strong esthetic means for the narrative, and other issues. Several examples from the film will be shown, sometimes in different versions and stages of the mix.


Saturday, May 9, 16:30 — 18:00

P23 - Psychoacoustics and Perception


P23-1 Influence of the Listening Room in the Perception of a Musical WorkNelia Valverde, Marcos D. Fernández, José Antonio Ballesteros, Leticia Martínez, Samuel Quintana, Isabel González, Escuela Universitaria Politécnica de Cuenca - Cuenca, Spain
Listening to the same musical composition generates a unique perception in every listener, but at the same time the specific acoustic conditions of the chosen room have a decisive influence on that perception. In order to evaluate such differences depending on the listening room, a musical work for choir was composed and recorded with a HATS in an anechoic room, in a reverberant room, and in a normal room. With those recordings, surveys of professional musicians and non-expert listeners were carried out after they had heard the recordings over headphones, and the answers obtained were evaluated in order to determine the influence of the listening room on the perception of the musical work.
Convention Paper 7775 (Purchase now)

P23-2 Comparison of Methods for Measuring Sound Quality through HATS and Binaural MicrophonesJosé Antonio Ballesteros, Marcos D. Fernández, Samuel Quintana, Isabel González, Laura Rodríguez, Escuela Universitaria Politécnica de Cuenca, Universidad de Castilla-La Mancha - Cuenca, Spain
Sound quality techniques are currently becoming more important as they take into account the human perception of sound. So far there are no well-established international standards for measuring sound quality and no widely recognized reference index for its assessment. Either a HATS or a pair of binaural microphones can be used for measuring the typical sound quality parameters. A set of measurements, under the same conditions, has been carried out using both devices to assess the differences and the possible variation in the results. As a consequence, guidance is given for choosing the device that best fits each measurement context.
Convention Paper 7776 (Purchase now)

P23-3 Improving Perceived Tempo Estimation by Statistical Modeling of Higher-Level Musical DescriptorsChing-Wei Chen, Markus Cremer, Kyogu Lee, Peter DiMaria, Ho-Hsiang Wu, Gracenote, Inc. - Emeryville, CA, USA
Conventional tempo estimation algorithms generally work by detecting significant audio events and finding periodicities of repetitive patterns in an audio signal. However, human perception of tempo is subjective and relies on a far richer set of information, causing many tempo estimation algorithms to suffer from octave errors, or “double/half-time” confusion. In this paper we propose a system that uses higher-level musical descriptors such as mood to train a statistical model of perceived tempo classes, which can then be used to correct the estimate from a conventional tempo estimation algorithm. Our experimental results show reliable classification of perceived tempo class, as well as a significant reduction of octave errors when applied to an array of available tempo estimation algorithms.
Convention Paper 7777 (Purchase now)
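An intentionally simplified illustration of the correction step described in P23-3: the raw estimate of a conventional tempo tracker is folded by factors of two into the BPM range of the perceived-tempo class predicted from higher-level descriptors such as mood. The class boundaries and the mood lookup below are hypothetical; the paper trains a statistical model rather than using a fixed table.

```python
def correct_octave_error(raw_bpm, predicted_class):
    """Halve or double a raw tempo estimate until it falls inside the
    BPM range of the perceived-tempo class predicted from mood features.

    `predicted_class` is a (low, high) BPM range, e.g., from a classifier.
    """
    low, high = predicted_class
    bpm = raw_bpm
    while bpm > high and bpm / 2.0 >= low:
        bpm /= 2.0          # correct "double-time" estimates
    while bpm < low and bpm * 2.0 <= high:
        bpm *= 2.0          # correct "half-time" estimates
    return bpm

# Hypothetical mapping from a mood label to a perceived-tempo class:
MOOD_TO_CLASS = {"mellow": (60, 90), "energetic": (110, 160)}
print(correct_octave_error(150.0, MOOD_TO_CLASS["mellow"]))  # -> 75.0
```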

P23-4 Perceptually-Motivated Audio Morphing: SoftnessDuncan Williams, Tim Brookes, University of Surrey - Guildford, Surrey, UK
A system for morphing the softness and brightness of two sounds independently of their other perceptual or acoustic attributes was implemented. The system is an extension of a previous one that morphed brightness only and was based on the Spectral Modeling Synthesis additive/residual model. A Multidimensional Scaling analysis of listener responses to paired comparisons of stimuli generated by the morpher showed movement in three perceptually orthogonal directions. These directions were labeled in a subsequent verbal elicitation experiment, which found that the effects of the brightness and softness controls were perceived as intended. A Timbre Morpher, adjusting additional timbral attributes with perceptually meaningful controls, can now be considered for further work.
Convention Paper 7778 (Purchase now)

P23-5 Resolution of Spatial Distribution Perception with Distributed Sound Source in Anechoic ConditionsOlli Santala, Ville Pulkki, Helsinki University of Technology - Espoo, Finland
The resolution of directional perception of spatially distributed sound sources was investigated with a listening test in an anechoic chamber using various sound source distributions. Fifteen loudspeakers were used to produce test cases that included sound sources of varying width and wide sound sources with gaps in the distribution. The subjects were asked to identify which loudspeakers emitted sound according to their own perception. Results show that small gaps in the sound source were not perceived accurately and that wide sound sources were perceived as narrower than they actually were. The results also indicate that the resolution for fine spatial details was worse than 15 degrees when the sound source was wide.
Convention Paper 7779 (Purchase now)

P23-6 Perceived Roughness—A Recent Psychoacoustic MeasurementRobert Mores, Thorsten Smit, Jana-Marie Wiese, University of Applied Science - Hamburg, Germany
This paper relates to an investigation of perceived roughness by Aures in 1984, whose findings are based on psychoacoustic tests with synthetic sounds and a small group of listeners. The related results have repeatedly been used for modeling roughness perception since then, for instance in the context of noise perception. Roughness is again an issue when investigating the perceived quality or timbre of musical sounds. In this context roughness is one among some ten mid-level features to be extracted. Here, perceived roughness is measured again, but on a wider basis than in the earlier investigation. This paper outlines the psychoacoustic investigation, basically following the method of Aures but modifying some of the issues in question. The results are reasonable and differ from the earlier findings in various respects.
Convention Paper 7780 (Purchase now)

P23-7 A Physiological Auditory ModelVáclav Vencovsky, Czech Technical University in Prague - Prague, Czech Republic
A physiological auditory model is described. The model simulates the processing of sound by the outer, middle, and inner ear. The nonlinear inner ear model comprises a cochlear frequency selectivity model and an inner hair cell model designed according to mammalian physiological data. The capability of the auditory model to simulate human psychophysical masking data is verified.
Convention Paper 7781 (Purchase now)


Saturday, May 9, 18:30 — 19:00

Live Concert


Abstract:
The band featured in the Live Sound Workshop LS2, Rauschenberger, will continue to play after the workshop finishes, in a concert open to all attendees.

The band "Rauschenberger" is a new upcoming group from Hannover around singer and leader Rauschenberger, who has a splendid and very characteristic voice.


Sunday, May 10, 09:00 — 11:30

W18 - Blu-ray—The Next (And Only?) Chance for High Resolution Music Media


Presenters:
Morten Lindberg, 2L Oslo
Johannes Müller, msm Mastering Munich
Ronald Prent, Freelance Engineer, Galaxy Studios

Abstract:
The concept of utilizing Blu-ray as a pure audio format will be explained, and Blu-ray will be positioned as the successor to both SACD and DVD-A. The operational functionality, and a dual concept that makes it usable both with and without a screen, will be demonstrated by showing a few products that are already on the market.


Sunday, May 10, 09:00 — 11:30

W19 - Audio over IP


Chair:
Heinz-Peter Reykers, WDR Köln
Panelists:
Joost Bloemen, Technica Del Arte BV
Jeremy Cooperstock, McGill University - Montreal, Quebec, Canada
Gerald List, TransTel Communications GmbH
Peter Stevens, BBC R&D

Abstract:
In radio broadcasting most transmissions and communication between remote broadcast venues and the radio house are based on ISDN lines. Within TV the communication lines are also based on ISDN circuits. But ISDN technology is gradually being replaced by IP networks and packet-based transmission. What are the consequences of this migration for broadcasters? In this workshop the focus will be on practical demonstrations. The topics range from SIP servers and infrastructure issues to audio examples regarding latency and other related parameters. Quality of Service, current problems, and possible solutions will be examined.


Sunday, May 10, 09:00 — 12:30

P24 - Assessment and Evaluation


Chair: Gaëtan Lorho

P24-1 Influence of Level Setting on Loudspeaker Preference RatingsVincent Koehl, Mathieu Paquier, Université de Brest - Plouzané, France
The perceived audio quality of a sound-reproduction device such as a loudspeaker is hard to evaluate. Industrial and academic researchers are still focusing on the design of reliable assessment procedures to measure this subjective character. One of the main issues with listening tests is their validity with regard to real comparison situations (hi-fi magazine evaluations, audiophiles, sound engineers, customers, etc.). Are the conclusions of laboratory tests consistent with these almost informal comparisons? As an example, one of the main differences between listening tests and real-life comparisons concerns loudness matching. This paper aims to compare paired-comparison tests, as commonly performed under laboratory conditions, with a procedure assumed to be closer to real-life conditions. It shows that differences in the test procedures led to differences in the subjective assessments.
Convention Paper 7782 (Purchase now)

P24-2 Comparing Three Methods for Sound Quality Evaluation with Respect to Speed and AccuracyFlorian Wickelmaier, Nora Umbach, Konstantin Sering, University of Tübingen - Tübingen, Germany; Sylvain Choisel, Bang & Olufsen A/S - Struer, Denmark, now at Philips Consumer Lifestyle, Leuven, Belgium
The goal of the present study was to compare three response-collection methods that may be used in sound quality evaluation. To this end, 52 listeners took part in an experiment where they assessed the audio quality of musical excerpts and six processed versions thereof. For different types of program material, participants performed (a) a direct ranking of the seven sound samples, (b) pairwise comparisons, and (c) a novel procedure, called ranking by elimination. The latter requires subjects on each trial to eliminate the least preferred sound; the elimination continues until only the sample with the highest audio quality is left. The methods are compared with respect to the resulting ranking/scaling and the time required to obtain the results.
Convention Paper 7783 (Purchase now)

P24-3 Reference Units for the Comparison of Speech Quality Test ResultsNicolas Côté, Ecole Nationale d’Ingénieurs de Brest - Plouzané, France, Deutsche Telekom Laboratories, Berlin, Germany; Vincent Koehl, Université de Brest - Plouzané, France; Valérie Gautier-Turbin, France Telecom R&D - Lannion, France; Alexander Raake, Sebastian Möller, Deutsche Telekom Laboratories - Berlin, Germany
Subjective tests are carried out to assess the quality of an entity as perceived by a user. However, several characteristics inherent to the subject or to the test methodology might influence the users’ judgments. As a result, reference conditions are usually included in subjective tests. In the field of quality of transmitted speech, reference conditions correspond to a speech sample impaired by a known amount of degradation. In this paper several kinds of reference conditions and the process used for their production are presented. Examples of the corresponding normalization procedure of each kind of reference are given.
Convention Paper 7784 (Purchase now)

P24-4 The Influence of Sound Processing on Listeners’ Program Choice in Radio BroadcastingHans-Joachim Maempel, Fabian Gawlik, Technische Universität Berlin - Berlin, Germany
Many opinions on broadcast sound processing are founded on tacit assumptions about certain effects on listeners. These, however, have so far lacked support from internally and ecologically valid empirical data. Thus, under largely realistic conditions it has been experimentally investigated to what extent broadcast sound processing influences listeners’ program choice. Technical features of the stimuli, socio-demographic data of the test persons, and data on listening conditions were additionally collected. In the main experiment, subjects were asked to choose one out of six audio stimuli varied in content and sound processing. The varied sound processing caused marginal and not statistically significant differences in the frequencies of program choice. By contrast, a subsequent experiment enabling a direct comparison of different sound processings of the same audio content yielded distinct preferences for certain sound processings.
Convention Paper 7785 (Purchase now)

P24-5 Free Choice Profiling and Natural Grouping as Methods for the Assessment of Emotions in Musical Audio SignalsSebastian Schneider, Florian Raschke, Ilmenau University of Technology - Ilmenau, Germany; Gabriel Gatzsche, Fraunhofer Institute for Digital Media Technology, IDMT - Ilmenau, Germany; Dominik Strohmeier, Ilmenau University of Technology - Ilmenau, Germany
To measure the perceived emotions caused by musical audio signals we propose to use “Free Choice Profiling” (FCP) combined with “Natural Grouping” (NG). FCP/NG, originally derived from food research and new to the study of music perception, allow participants to evaluate stimuli using their own vocabulary. To evaluate the proposed methods, we conducted an experiment in which 16 participants had to assess major-major and minor-minor chord pairs. Contrary to what one might expect, allowing participants to express themselves freely does not degrade the quality of the data. Instead, clearly interpretable results consistent with music theory and emotional psychology were obtained. These results encourage further investigations, which could lead to a general method for assessing emotions in music.
Convention Paper 7786 (Purchase now)

P24-6 Subjective Quality Evaluation of Audio Streaming Applications on Absolute and Paired Rating ScalesBernhard Feiten, Alexander Raake, Marie-Neige Garcia, Ulf Wüstenhagen, Jens Kroll, Deutsche Telekom Laboratories - Berlin, Germany
In the context of the development of a parametric model for the quality assessment of audiovisual IP-based multimedia applications, audio tests have been carried out. The test method used for the subjective audio tests was aligned to the method used for the video tests; hence, the Absolute Category Rating (ACR) method was applied. To prove the usability of ACR tests for this purpose, MUSHRA and ACR were applied in parallel listening tests. The MPEG audio codecs AAC, HE-AAC, MP2, and MP3 were evaluated at different bit rates and under different packet-loss conditions. The test results show that the ACR method also reveals the quality differences for higher qualities, even though MUSHRA has superior resolution.
Convention Paper 7787 (Purchase now)

P24-7 Assessor Selection Process for Multisensory ApplicationsSøren Vase Legarth, Nick Zacharov, DELTA SenseLab - Hørsholm, Denmark
Assessor panels are used to perform perceptual evaluation tasks in the form of listening and viewing tests. In order to ensure the quality of the collected data it is vital that the selected assessors have the desired qualities in terms of discrimination aptitude as well as consistent rating ability. This work extends existing procedures in this field to provide a statistically robust and efficient manner of assessing and evaluating the performance of assessors for listening and viewing tasks.
Convention Paper 7788 (Purchase now)


Sunday, May 10, 09:00 — 12:30

P25 - Sound Design and Processing


Chair: Michael Hlatky

P25-1 Hierarchical Perceptual MixingAlexandros Tsilfidis, Charalambos Papadakos, John Mourjopoulos, University of Patras - Patras, Greece
A novel technique of perceptually-motivated signal-dependent audio mixing is presented. The proposed Hierarchical Perceptual Mixing (HPM) method is implemented in the spectro-temporal domain; its principle is to combine only the perceptually relevant components of the audio signals, derived after the calculation of the minimum masking threshold, which is introduced in the mixing stage. Objective measures are presented indicating that the resulting signals have enhanced dynamic range and lower crest factor with no unwanted artifacts, compared to the traditionally mixed signals. The overall headroom is improved, while clarity and tonal balance are preserved.
Convention Paper 7789 (Purchase now)
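A heavily simplified sketch of the general idea behind P25-1, not the HPM method itself: tracks are combined in the spectro-temporal (STFT) domain, and per-bin components lying far below the strongest track in that bin are discarded before summation. The fixed dB threshold below is only a stand-in for the psychoacoustic minimum masking threshold computed by the actual method, and equal track lengths are assumed.

```python
import numpy as np
from scipy.signal import stft, istft

def simplified_perceptual_mix(tracks, fs=48000, nfft=2048, drop_below_db=40.0):
    """Mix equal-length tracks in the STFT domain, discarding per-bin
    components more than `drop_below_db` below the strongest track in
    that bin (a crude stand-in for a real masking model)."""
    specs = [stft(x, fs, nperseg=nfft)[2] for x in tracks]
    mags = np.abs(np.stack(specs))                     # (tracks, bins, frames)
    strongest = mags.max(axis=0) + 1e-12
    keep = mags > strongest * 10.0 ** (-drop_below_db / 20.0)
    mix_spec = sum(S * k for S, k in zip(specs, keep))
    return istft(mix_spec, fs, nperseg=nfft)[1]
```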

P25-2 Source-Filter Modeling in Sinusoid DomainWen Xue, Mark Sandler, Queen Mary, University of London - London, UK
This paper presents the theory and implementation of source-filter modeling in the sinusoid domain and its application to timbre processing. The technique decomposes the instantaneous amplitude in a sinusoid model into a source part and a filter part, each capturing a different aspect of the timbral property. We show that sinusoid-domain source-filter modeling is approximately equivalent to its time- or frequency-domain counterparts. Two methods are proposed for the evaluation of the source and filter: a least-squares method based on the assumption of slow variation of source and filter in time, and a filter bank method that models the global spectral envelope in the filter. Tests show the effectiveness of the algorithms for isolating frequency-driven amplitude variations. Example applications are given to demonstrate the use of the technique for timbre processing.
Convention Paper 7790 (Purchase now)

P25-3 Analysis of a Modified Boss DS-1 Distortion PedalMatthew Schneiderman, Mark Sarisky, University of Texas at Austin - Austin, TX, USA
Guitar players are increasingly modifying (or paying someone else to modify) inexpensive mass-produced guitar pedals into boutique units. The Keeley modification of the Boss DS-1 is a prime example. In this paper we compare the measured and perceived performance of a Boss DS-1 before and after applying the Keeley All-Seeing-Eye and Ultra mods. This paper sheds light on psychoacoustics, signal processing, and guitar recording techniques in relation to low fidelity guitar distortion pedals.
Convention Paper 7791 (Purchase now)

P25-4 Phase and Amplitude Distortion Methods for Digital Synthesis of Classic Analog WaveformsJoseph Timoney, Victor Lazzarini, Brian Carty, NUI Maynooth - Maynooth, Ireland; Jussi Pekonen, Helsinki University of Technology - Espoo, Finland
An essential component of digital emulations of subtractive synthesizer systems are the algorithms used to generate the classic oscillator waveforms of sawtooth, square, and triangle waves. Not only should these be perceived to be authentic sonically, but they should also exhibit minimal aliasing distortions and be computationally efficient to implement. This paper examines a set of novel techniques for the production of the classic oscillator waveforms of analog subtractive synthesis that are derived from using amplitude or phase distortion of a mono-component input waveform. Expressions for the outputs of these distortion methods are given that allow parameter control to ensure proper bandlimited behavior. Additionally, their implementation is demonstrably efficient. Last, the results presented illustrate their equivalence to their original analog counterparts.
Convention Paper 7792 (Purchase now)
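A minimal sketch of the phase-distortion idea behind P25-4, in the spirit of classic phase-distortion synthesis: a cosine oscillator is read with a piecewise-linear warped phase so that the output approaches a sawtooth shape as the breakpoint d moves toward 0. This is a generic, non-bandlimited illustration, not the aliasing-suppressed algorithms presented in the paper.

```python
import numpy as np

def phase_distortion_saw(freq=220.0, d=0.1, dur=1.0, fs=48000):
    """Sawtooth-like tone via phase distortion of a cosine.

    The normalized phase ramp phi in [0, 1) is warped so that it already
    reaches 0.5 at phi = d; reading cos(2*pi*warped_phi) then yields a
    waveform that approaches a (non-bandlimited) sawtooth as d -> 0.
    """
    n = np.arange(int(dur * fs))
    phi = (freq * n / fs) % 1.0                          # plain phase ramp
    warped = np.where(phi < d,
                      0.5 * phi / d,                     # fast first segment
                      0.5 + 0.5 * (phi - d) / (1.0 - d)) # slow second segment
    return np.cos(2.0 * np.pi * warped)
```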

P25-5 Soundscape Attribute IdentificationMartin Ljungdahl Eriksson, Jan Berg, Luleå University of Technology - Luleå, Sweden
In soundscape research, the field’s methods can be employed in combination with approaches involving sound quality attributes in order to create a deeper understanding of sound images and soundscapes and how these may be described and designed. The integration of four methods is outlined, two from the soundscape domain and two from the sound engineering domain.
Convention Paper 7793 (Purchase now)

P25-6 SonoSketch: Querying Sound Effect Databases through PaintingMichael Battermann, Sebastian Heise, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Numerous techniques support finding sounds that are acoustically similar to a given one. It is hard, however, to find a sound to start the similarity search with. Inspired by systems for image search that allow drawing the shape to be found, we address quick input for audio retrieval. In our system, the user literally sketches a sound effect, placing curved strokes on a canvas. Each of these represents one sound from a collection of basic sounds. The audio feedback is interactive, as is the continuous update of the list of retrieval results. The retrieval is based on symbol sequences formed from MFCC data, compared with the help of a neural network using an edit distance that allows for small temporal changes.
Convention Paper 7794 (Purchase now)
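A rough sketch of the retrieval back end described in P25-6, with the neural-network comparison replaced by a plain Levenshtein distance for brevity: MFCC frames are vector-quantized into symbols (k-means stands in for whatever codebook the system actually uses), and query and candidate sounds are compared by edit distance on the resulting symbol strings. The use of librosa and scikit-learn, and all names below, are assumptions made for this illustration only.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def symbol_sequence(path, codebook, sr=22050):
    """Turn a sound file into a sequence of codebook symbols via MFCCs."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x coeffs
    return codebook.predict(mfcc)

def edit_distance(a, b):
    """Plain Levenshtein distance between two symbol sequences."""
    dp = np.arange(len(b) + 1)
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i                   # prev holds dp[i-1][0]
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

# Hypothetical usage: fit a codebook on MFCC frames pooled from the collection,
# then rank candidate sounds by edit distance to the sketched query.
# codebook = KMeans(n_clusters=32).fit(pooled_mfcc_frames)
# d = edit_distance(symbol_sequence("query.wav", codebook),
#                   symbol_sequence("candidate.wav", codebook))
```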

P25-7 Generic Sound Effects to Aid in Audio RetrievalDavid Black, Sebastian Heise, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Sound design applications are often hampered because the sound engineer must either produce new sounds using physical objects or search through a database of sounds to find a suitable sample. We created a set of basic sounds to mimic these physical sound-producing objects, leveraging the mind's onomatopoetic clustering capabilities. These sounds, grouped into onomatopoetic categories, aid the sound designer in music information retrieval (MIR) and sound categorization applications. Initial tests in which individual sounds were grouped by similarity showed that participants tended to group certain sounds together, often reflecting the groupings our team had constructed.
Convention Paper 7795 (Purchase now)


Sunday, May 10, 09:30 — 13:30

TT11 - ARRI-Production Studios


Abstract:
Sound has an important place in the full-service offering from ARRI Film and TV. State-of-the-art technology ensures sound designs of the highest quality. Three modern mixing and screening studios, two recording studios, and nine sound editing suites are available. Experience the spatial and architectural acoustics of the large mixing studios, designed, built, and presented by the architectural and engineering company “Concept-A.” The technology of film mixing studios built to global standards is impressively demonstrated by mixing and dubbing, step by step, a showreel from films that were edited at ARRI. A compilation of highlights from ARRI sound mixes must not be missed. Additionally, a Foley artist will demonstrate his acoustic world. Finally, you will enjoy the restoration of an old movie that has not lost its original character.


Price: EUR 20

Sunday, May 10, 09:30 — 13:30

TT12 - Bavaria-Studios Cinepostproduction


Abstract:
The CinePostproduction sound department offers the full range of services from editing, sound design, and sound mixing in all formats for feature films, TV movies, and series. The first Dolby Digital mix in Europe, for Stalingrad, was carried out in the sound studios at the Bavaria Film studios at Geiselgasteig, Munich, which have belonged to CinePostproduction Bavaria Bild & Ton since 1999. Since 1987 the internationally renowned Chief Re-recording Mixer Michael Kranz has worked, initially alongside Oscar nominee Milan Bor, with directors such as Jean-Jacques Annaud, Jean-Jacques Beineix, Volker Schlöndorff, Bernd Eichinger, and Til Schweiger. He has now mixed more than 100 feature films, including Resident Evil with over 350 sound tracks. The Munich sound department offers two fully compatible digital studios, a facility that is unique in Germany. Within a short time frame, a film can be edited concurrently in two studios, or two projects can be coordinated at the same time, as with the mixing of (T)Raumschiff Surprise and The Downfall. In addition to the two mixing studios for feature films and two TV mixing studios, there are eight Avid and eight ProTools suites, two re-recording rooms, and a THX preview cinema.


Price: EUR 20

Sunday, May 10, 12:00 — 14:00

W20 - Standards-Based Audio Networks Using IEEE 802.1 AVB


Presenters:
Edward Clarke, XMOS Semiconductor - Bristol, UK
Rick Kreifeldt, Harman International - UT, USA

Abstract:
Recent work by IEEE 802 working groups will allow vendors to build a standards-based network with the appropriate quality of service for high quality audio performance and production. This new set of standards, developed by the IEEE 802.1 Audio Video Bridging Task Group, provides three major enhancements for 802-based networks: (1) Precise timing to support low-jitter media clocks and accurate synchronization of multiple streams; (2) A simple reservation protocol that allows an endpoint device to notify the various network elements in a path so that they can reserve the resources necessary to support a particular stream; and (3) Queuing and forwarding rules that ensure that such a stream will pass through the network within the delay specified by the reservation. These enhancements require no changes to the Ethernet lower layers and are compatible with all the other functions of a standard Ethernet switch (a device that follows the IEEE 802.1Q bridge specification). As a result, all of the rest of the Ethernet ecosystem is available to developers, in particular, the various high speed physical layers (up to 10 gigabit/sec in current standards, even higher speeds are in development), security features (encryption and authorization), and advanced management (remote testing and configuration) features can be used. This workshop will outline the basic protocols and capabilities of AVB networks, describe how such a network can be used, and provide some simple demonstrations of network operation (including a live comparison with a legacy Ethernet network).


Sunday, May 10, 12:00 — 14:00

W21 - Multimodal Integration: Audio-Visual-Haptic-Tactile


Co-chairs:
Durand Begault, NASA Ames Research Center
Ellen Haas, US Army Research Lab - MD, USA
Panelists:
Ercan Altinsoy, Technical University of Dresden - Dresden, Germany
Jeremy Cooperstock, McGill University - Montreal, Quebec, Canada
Gerhard Mauter, Audi AG - Germany
Alexander S. Treiber, ACC Akustik - Germany

Abstract:
To design the best human interfaces for complex systems such as automobiles, mixing consoles, shared reality spaces, games, military, and aerospace, an increasing number of designers, psychologists, and human factors specialists are reconsidering multimodal displays of information that includes haptic and tactile feedback in addition to auditory-visual communication. This workshop considers in particular the way in which various modalities of information can be best integrated for a particular application, and identifies future requirements for such research.


Sunday, May 10, 14:30 — 17:00

T10 - Audio System Grounding & Interfacing—An Overview


Chair:
Bill Whitlock

Abstract:
Although the subject has a reputation as a black art, this tutorial replaces myth and hype with insight and knowledge, revealing the true causes of system noise and ground loops. Although safety must be the top priority, some widely used cures are both illegal and deadly. Both balanced and unbalanced interfaces are vulnerable to noise coupling, but the unbalanced interface is exquisitely so due to an intrinsic problem. Because balanced interfaces are widely misunderstood, their theoretically perfect noise rejection is severely degraded in most real-world systems. Some equipment, because of an innocent design error, has a built-in noise problem. A simple, no-test-equipment troubleshooting method can pinpoint the location and cause of system noise. Ground isolators in the signal path solve the fundamental noise coupling problems. Also discussed are unbalanced-to-balanced connections, RF interference, and power line treatments such as technical power, balanced power, isolation transformers, and surge suppressors.


Sunday, May 10, 14:30 — 16:30

W22 - MPEG SAOC: Interactive Audio and Broadcasting, Music 2.0, Next Generation Telecommunication


Chair:
Oliver Hellmuth, Fraunhofer IIS - Erlangen, Germany
Panelists:
Jonas Engdegård, Dolby - Stockholm, Sweden
Christof Faller, Illusonic LLC - Lausanne, Switzerland
Jürgen Herre, Fraunhofer IIS - Erlangen, Germany
Werner Oomen, Philips Applied Technologies - Eindhoven, The Netherlands

Abstract:
Recently the ISO/MPEG standardization group launched an activity for bit-rate-efficient and backward compatible coding of multiple sound objects that heavily exploits the human perception of spatial sound. On the receiving side, such a "Spatial Audio Object Coding" (SAOC) system renders the transmitted objects interactively into a sound scene on any desired reproduction setup. Based on the SAOC technology, elegant solutions for Interactive Audio and Broadcasting, Music 2.0, and Next Generation Telecommunication become feasible. The workshop reviews the ideas and principles behind Spatial Audio Object Coding, especially highlighting its possibilities and benefits for these new types of applications. Additionally, the potential of SAOC is illustrated by means of real-time demonstrations.