AES Berlin 2014
Paper Session P11

P11 - Spatial Audio

Tuesday, April 29, 09:00 — 12:30 (Room Paris)

Chair: Clemens Par, Swiss Audec - Morges, Switzerland

P11-1 Control of Frame Loudspeaker Array for 3-D Television
Akio Ando, University of Toyama - Toyama, Japan; Masafumi Fujii, University of Toyama - Toyama, Japan
To obtain stable sound localization on a TV display, a loudspeaker array set on the frame of the display may be a solution. However, the frequency response and the shape of the wave front reproduced by the array sometimes deteriorate. This is because wave field synthesis with Rayleigh integrals may not be effective in the absence of a secondary source on the display. In this study we use the Rayleigh I integral to calculate the input signals of the loudspeakers and introduce weighting coefficients for the signals to alleviate the deterioration. Error functions are defined to quantify this deterioration and are minimized by simulated annealing. As a result, the frequency response and the wave surface were improved regardless of the virtual source position.
Convention Paper 9076 (Purchase now)
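The weighting-coefficient optimization described above can be illustrated with a generic simulated-annealing loop. This is a minimal sketch, not the authors' implementation: the error function, step size, and cooling schedule here are all hypothetical stand-ins for the paper's wavefront-deviation metric.

```python
import math
import random

random.seed(0)  # deterministic for illustration

def simulated_annealing(error_fn, w0, steps=2000, t0=1.0, cooling=0.995):
    """Minimize error_fn over a weight vector by simulated annealing.

    Generic sketch: perturb one weight at a time, accept worse moves
    with a Boltzmann probability that shrinks as the temperature cools.
    """
    w = list(w0)
    best_w, best_e = list(w), error_fn(w)
    e, t = best_e, t0
    for _ in range(steps):
        cand = list(w)
        i = random.randrange(len(cand))
        cand[i] += random.gauss(0.0, 0.1)   # small random perturbation
        ce = error_fn(cand)
        if ce < e or random.random() < math.exp(-(ce - e) / max(t, 1e-12)):
            w, e = cand, ce                 # accept the candidate move
        if e < best_e:
            best_w, best_e = list(w), e     # track the best weights seen
        t *= cooling                        # geometric cooling schedule
    return best_w, best_e

# Toy quadratic error standing in for the paper's error functions:
target = [0.2, 0.5, 0.9, 0.5, 0.2]          # hypothetical ideal weights
err = lambda w: sum((a - b) ** 2 for a, b in zip(w, target))
w_opt, e_opt = simulated_annealing(err, [1.0] * 5)
```

The annealed weights approach the toy target; in the paper the same machinery would instead drive the loudspeaker input weights toward minimal wavefront and frequency-response error.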

P11-2 Ambidio: Sound Stage Width Extension for Internal Laptop Loudspeakers
Tsai-Yi Wu, New York University - New York, NY, USA; Agnieszka Roginska, New York University - New York, NY, USA; Ralph Glasgal, Ambiophonics Institute - Rockleigh, NJ, USA
This paper introduces a sound stage width extension method for internal loudspeakers. Ambidio is a real-time application that enhances a stereo sound file playing on a laptop in order to provide a more immersive experience over built-in laptop loudspeakers. The method, based on Ambiophonics principles, is relatively robust to a listener's head position and requires no measured/synthesized HRTFs. The key novelty of the approach is the pre/post-processing algorithm that dynamically tracks the image spread and modifies it to fit the hardware setting in real-time. Two detailed evaluations are provided to assess the robustness of the proposed method. Experimental results show that the average perceived stage width of Ambidio is 176° using internal speakers, while keeping a relatively flat frequency response and a higher user preference rating.
Convention Paper 9077 (Purchase now)

P11-3 On Spatial-Aliasing-Free Sound Field Reproduction using Infinite Line Source Arrays
Frank Schultz, University of Rostock / Institute of Communications Engineering - Rostock, Germany; Till Rettberg, University of Rostock - Rostock, Germany; Sascha Spors, University of Rostock - Rostock, Germany
Concert sound reinforcement systems aim at the reproduction of homogeneous sound fields over extended audiences for the whole audio bandwidth. For the last two decades this has mostly been approached by using so-called line source arrays due to their superior ability to produce homogeneous sound fields. Design and setup criteria for line source arrays were derived in the literature as Wavefront Sculpture Technology. This paper introduces a viewpoint on the problem at hand by utilizing a signal processing model for sound field synthesis. It will be shown that the optimal radiation of a line source array can be considered as a special case of spatial-aliasing-free synthesis of a wave front that propagates perpendicular to the array. For high frequencies the so-called waveguide operates as a spatial low-pass filter and therefore attenuates energy that would otherwise lead to spatial aliasing artifacts.
Convention Paper 9078 (Purchase now)

P11-4 2-D to 3-D Upmixing Based on Perceptual Band Allocation (PBA)
Hyunkook Lee, University of Huddersfield - Huddersfield, UK
Listening tests were carried out to evaluate the performance of a 2-D to 3-D ambience upmixing technique based on “Perceptual Band Allocation (PBA),” which is a novel vertical image extension method. Five-channel recordings were made with a 3-channel frontal microphone array and a 4-channel ambience array in a concert hall. The 4-channel ambience signals were low- and high-pass filtered at three different crossover frequencies: 0.5 kHz, 1 kHz, and 4 kHz. For 2-D to 3-D upmixing, the low-passed signals were routed to the corresponding lower-layer loudspeakers and the high-passed ones to the upper-layer loudspeakers, configured in a 9-channel Auro3D-inspired setup. Results suggested that the proposed method produced a similar or greater magnitude of perceived 3-D listener envelopment compared to an original 9-channel ambience recording as well as the original 5-channel recording, depending on the crossover frequency.
Convention Paper 9079 (Purchase now)
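The band-splitting step of the upmixing scheme can be sketched as a complementary low/high split of each ambience channel at the crossover frequency. This is a minimal illustration under assumed parameters: the abstract does not specify the filter design, so a simple first-order low-pass with a derived complementary high band stands in here.

```python
import math

def one_pole_lowpass(x, fc, fs):
    """First-order low-pass; hypothetical stand-in for the paper's
    band-splitting filters (the actual filter design is not specified)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for s in x:
        state += alpha * (s - state)   # leaky integrator update
        y.append(state)
    return y

def pba_split(ambience, fc=1000.0, fs=48000.0):
    """Split an ambience channel at crossover fc: the low band would feed
    a lower-layer loudspeaker, the complementary high band the upper layer."""
    low = one_pole_lowpass(ambience, fc, fs)
    high = [s - l for s, l in zip(ambience, low)]   # complementary residual
    return low, high

# A 200 Hz test tone lands almost entirely in the lower-layer band
# when split at the 1 kHz crossover:
sig = [math.sin(2 * math.pi * 200 * n / 48000.0) for n in range(480)]
low, high = pba_split(sig)
```

Because the high band is formed as the residual, the two layers sum back to the original channel exactly, which keeps the split transparent at the listening position.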

P11-5 Customization of Head-Related Impulse Response Via Two-Dimension Common Factor Decomposition and Sampled Measurements
Zhixin Wang, City University of Hong Kong - Kowloon, Hong Kong; Cheung Fat Chan, City University of Hong Kong - Kowloon, Hong Kong
A method based on subject-dependent impulse response extraction is proposed for the customization of head-related impulse responses. In the training step, a two-dimension common factor decomposition algorithm is applied to train a set of direction-dependent impulse responses that are common to all subjects. A subject-dependent impulse response is simultaneously extracted for each subject to capture the subject-dependent information. In the customization step, the subject-dependent impulse response of a target subject is extracted from several head-related impulse response measurements of that subject. The extracted subject-dependent impulse response is then convolved with the trained direction-dependent impulse responses to construct all head-related impulse responses for the target subject. It is shown that with head-related impulse responses measured at only a few directions for a target subject, head-related impulse responses at all trained directions can be customized with fairly low distortion.
Convention Paper 9080 (Purchase now)

P11-6 A Flexible System Architecture for Collaborative Sound Engineering in Object-Based Audio Environments
Gabriel Gatzsche, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Audanika GmbH; Christoph Sladeczek, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany
Object-based sound reproduction, on the one hand, allows sound engineers to interact with sound objects, not only during production but also in the reproduction venue. On the other hand, object-based systems are quite complex. Multicore audio processors are used to render complex sound scenes consisting of hundreds of audio objects to be reproduced over a large number of loudspeaker channels. This results in the need for applications optimally adapted to the user, and working tasks need to be parallelized. This paper outlines a software architecture that helps to incorporate the multitude of audio processing components of an object-based spatial audio environment into a unified system. The architecture allows multiple sound engineers to access, monitor, control, and/or change these system components' parameters collaboratively using wireless mobile devices.
Convention Paper 9082 (Purchase now)

P11-7 Effect of Microphone Number and Positioning on the Average of Frequency Responses in Cinema Calibration
Giulio Cengarle, Dolby Laboratories - Barcelona, Spain; Toni Mateos, Dolby Laboratories - Barcelona, Spain
When measuring the response of a loudspeaker by averaging over multiple points in a room, the results typically vary according to the number of microphones employed and their positions. We present an interpretation of the averaging procedure showing that the average converges to a compromise response over the relevant listening area, at a rate inversely proportional to the square root of the number of microphones employed. We then provide real-world examples by performing measurements in a dubbing stage and a cinema theater, and by analyzing the variations of the averaged frequency responses over a large set of different microphone counts and positions. Results confirm the predicted scaling of the deviations and quantify their magnitude in typical rooms. The data provided helped to establish the point of diminishing returns in the number of microphones.
Convention Paper 9083 (Purchase now)
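The inverse-square-root convergence claimed above follows from averaging independent deviations, and can be checked with a small Monte Carlo sketch. The model here is an assumption for illustration: each microphone position's deviation from the room's compromise response is drawn independently (a few dB of spread), whereas real positions are only approximately independent.

```python
import math
import random

random.seed(1)  # deterministic for illustration

def averaged_response_std(n_mics, n_trials=2000, room_std=3.0):
    """Monte Carlo sketch: draw each microphone's deviation from the
    compromise response (std in dB, hypothetical), average over n_mics
    positions, and measure the spread of that average across trials."""
    errors = []
    for _ in range(n_trials):
        avg = sum(random.gauss(0.0, room_std) for _ in range(n_mics)) / n_mics
        errors.append(avg)
    mean = sum(errors) / n_trials
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / n_trials)

# Residual deviation of the average for 1, 4, and 16 microphones:
s1, s4, s16 = (averaged_response_std(n) for n in (1, 4, 16))
# Each quadrupling of the microphone count roughly halves the deviation,
# i.e. the 1/sqrt(N) scaling the paper verifies against real measurements.
```

This is also where the "diminishing returns" arise: going from 1 to 4 microphones removes half the deviation, while going from 16 to 64 removes only a fraction of a dB.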


EXHIBITION HOURS
April 26th   10:00 - 18:30
April 27th   09:00 - 18:30
April 28th   09:00 - 18:30
April 29th   09:00 - 14:00

REGISTRATION DESK
April 26th   09:30 - 18:30
April 27th   08:30 - 18:30
April 28th   08:30 - 18:30
April 29th   08:30 - 16:30

TECHNICAL PROGRAM
April 26th   10:00 - 18:00
April 27th   09:00 - 18:00
April 28th   09:00 - 18:00
April 29th   09:00 - 17:00