Last Updated: 2007-04-16
P5 - Spatial Audio Perception and Processing - 2
Saturday, May 5, 14:30 — 17:30
Chair: Gerhard Stoll, IRT - Munich, Germany
P5-1 On the Application of Sound Source Separation to Wave-Field Synthesis—Máximo Cobos, Jose López, Technical University of Valencia - Valencia, Spain
Wave-Field Synthesis (WFS) is a spatial sound system that can synthesize an acoustic field over an extended listening area by means of loudspeaker arrays. Spatial positioning of virtual sources is possible, but it requires a separate signal for each source. Although most music is recorded with a separate track per instrument, this information is lost in the stereo mix-down, and most existing recorded material is available only in stereo format. In this paper we propose using sound source separation techniques to overcome this problem. Existing algorithms are still far from perfect, producing audible artifacts that clearly reduce the quality of the resynthesized sources in practice. However, when the separated sources are mixed again by a WFS system, these artifacts are largely masked by the other sounds. The utility of different separation algorithms and the corresponding subjective results are discussed.
Convention Paper 7016
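The WFS rendering that the separated sources would feed can be illustrated, in heavily simplified form, by a delay-and-attenuate sketch: each loudspeaker receives the source signal delayed by its distance to the virtual source and attenuated by spherical spreading. The function name and array geometry below are our own illustration, not the authors' implementation.

```python
import numpy as np

def wfs_driving_delays(source_pos, speaker_positions, c=343.0):
    """Per-loudspeaker delay (s) and normalized 1/r amplitude for a
    virtual point source behind the array (simplified sketch, not a
    full WFS driving function)."""
    spk = np.asarray(speaker_positions, dtype=float)
    d = np.linalg.norm(spk - np.asarray(source_pos, dtype=float), axis=1)
    delays = d / c                        # propagation time per speaker
    gains = 1.0 / np.maximum(d, 1e-6)     # spherical spreading loss
    gains /= gains.max()                  # loudest channel at unity
    return delays, gains

# linear array of 8 speakers along x, virtual source 2 m behind center
speakers = [(x, 0.0) for x in np.linspace(-1.75, 1.75, 8)]
delays, gains = wfs_driving_delays((0.0, -2.0), speakers)
```

Each separated source would be rendered this way with its own position; the artifacts the abstract mentions then sum with the other rendered sources at the listening position.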
P5-2 Reproduction of Arbitrarily Shaped Sound Sources with Wave Field Synthesis—Physical and Perceptual Effects—Marije Baalman, Technische Universität Berlin - Berlin, Germany
Current Wave Field Synthesis (WFS) implementations only allow for point sources and plane waves. In order to reproduce arbitrarily shaped sound sources with WFS, several aspects need to be considered: the WFS operator for source points outside of the horizontal plane, the discretization of the object surface, and the diffraction of sound around the sounding object itself, which can be modeled by introducing secondary sources at the edges of the object. This paper discusses these issues, describes their implementation in software, and presents results of both objective and subjective evaluation.
Convention Paper 7017
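The surface-discretization step the abstract mentions can be illustrated with a minimal 2-D sketch: a circular source contour is replaced by point sources spaced no farther apart than a chosen sampling distance (in practice roughly half the shortest wavelength, to limit spatial aliasing). The function and the half-wavelength rule of thumb are our own illustration, not the paper's method.

```python
import numpy as np

def discretize_circle(radius, spacing):
    """Approximate a circular source contour by point sources whose
    neighbor spacing does not exceed `spacing`; each point carries an
    arc-length weight (simplified stand-in for surface discretization)."""
    n = max(3, int(np.ceil(2 * np.pi * radius / spacing)))
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    points = np.column_stack([radius * np.cos(theta),
                              radius * np.sin(theta)])
    weights = np.full(n, 2 * np.pi * radius / n)   # arc length per point
    return points, weights

# e.g., lambda/2 spacing for ~2 kHz: 343/2000/2 ~= 0.086 m
points, weights = discretize_circle(0.5, 0.086)
```

Each resulting point would then be rendered as an ordinary WFS point source; the paper additionally adds secondary sources at the object's edges to model diffraction.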
P5-3 The Effect of Head Diffraction on Stereo Localization in the Mid-Frequency Range—Eric Benjamin, Phil Brown, Dolby Laboratories - San Francisco, CA, USA
In a previous paper, the present authors described anomalous localization in intensity stereo at frequencies above that at which the head is approximately one wavelength in diameter. Conventional analyses of stereo localization have usually depended on an asymptotic, shadowless model of the head's diffraction. Measurements of the ear signals heard by subjects in localization experiments showed large differences between what the simple model predicted and what was found in actual circumstances. We present a simple model of the head's diffraction in the range of 1200 Hz to 5000 Hz and show that it produces results that correspond more closely to real-world localization.
Convention Paper 7018
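For context on head-diffraction models, the classic spherical-head (Woodworth) formula estimates the interaural time difference from the extra arc-plus-chord path around the head; this is textbook material contrasted by the abstract, not the authors' proposed model.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Spherical-head (Woodworth) ITD estimate: the far-ear path exceeds
    the near-ear path by r*(theta + sin(theta)) for a source at azimuth
    theta (0 = straight ahead, 90 deg = directly to one side)."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

# for an average head radius, a source at 90 degrees gives about 0.66 ms
itd_side = woodworth_itd(90.0)
```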
P5-4 Multiple Exponential Sweep Method for Fast Measurement of Head Related Transfer Functions—Piotr Majdak, Peter Balazs, Bernhard Laback, Austrian Academy of Sciences - Vienna, Austria
Presenting sounds in virtual environments requires filtering free-field signals with head-related transfer functions (HRTFs). HRTFs describe the filtering effects of the pinna, head, and torso, measured in the ear canal of a subject. Measuring HRTFs for many positions in space is a time-consuming procedure. To speed up the measurement, the multiple exponential sweep method (MESM) was developed, which interleaves and overlaps sweeps in an optimized way and retrieves the impulse responses of the measured systems. In this paper the MESM and its parameter optimization are described. As an example application, the measurement duration for an HRTF set with 1550 positions is compared to that of the unoptimized method. Using MESM, the measurement duration could be reduced by a factor of four without a reduction in signal-to-noise ratio.
Convention Paper 7019
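MESM builds on single exponential-sweep deconvolution. A minimal single-sweep sketch (Farina-style; our own illustration of the underlying idea, not the interleaved and overlapped MESM procedure) generates the sweep and its inverse filter, whose convolution collapses to an approximate impulse:

```python
import numpy as np

def exp_sweep(f1, f2, duration, fs):
    """Exponential sine sweep from f1 to f2 Hz over `duration` seconds."""
    t = np.arange(int(duration * fs)) / fs
    L = duration / np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

def inverse_filter(sweep, fs, f1, f2):
    """Time-reversed sweep, amplitude-modulated so the late (low-
    frequency) part is attenuated; convolving the sweep with this
    yields an approximate unit-peak impulse."""
    n = len(sweep)
    t = np.arange(n) / fs
    L = (n / fs) / np.log(f2 / f1)
    inv = sweep[::-1] * np.exp(-t / L)
    inv /= np.max(np.abs(np.convolve(sweep, inv)))  # normalize peak to 1
    return inv

fs, f1, f2, dur = 8000, 100.0, 3000.0, 0.5
sweep = exp_sweep(f1, f2, dur, fs)
inv = inverse_filter(sweep, fs, f1, f2)
impulse = np.convolve(sweep, inv)   # peaks near t = dur
```

MESM's contribution is scheduling many such sweeps so that their system responses can be separated after deconvolution, rather than measuring each position serially.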
P5-5 A Fast Multipole Boundary Element Method for Calculating HRTFs—Wolfgang Kreuzer, Zhensheng Chen, Austrian Academy of Sciences - Vienna, Austria
Measuring head-related transfer functions (HRTFs) for an individual person is a lengthy and complicated procedure. To avoid this problem, a numerical model based on the Boundary Element Method (BEM) is introduced. In general, such methods have the drawback that the computations at high frequencies are very time- and resource-consuming. To reduce these costs, the BEM model is combined with a fast multipole method (FMM) and a reciprocal approach.
Convention Paper 7020
P5-6 A Hybrid Artificial Reverberation Algorithm—Rebecca Stewart, Queen Mary, University of London - London, UK; Damian Murphy, University of York - York, UK
Convolution-based reverberation allows accurate reproduction of a space but offers no flexibility in defining that space, while filterbank-based reverberation offers computational efficiency and flexibility but lacks accuracy. A hybrid artificial reverberation algorithm that uses elements of both approaches is investigated. An impulse response is truncated to contain only the early reflections and is convolved with the input audio; the output is then combined with audio processed through a filterbank that simulates the late reflections. The parameters defining the filterbank are derived from the analyzed impulse response. It is shown that this hybrid reverberator can produce high-quality reverberation comparable to convolution reverberators.
Convention Paper 7021
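The hybrid structure can be sketched as: early part by direct convolution with the truncated impulse response, late part by a small comb-filter bank. Note the comb delays and gains below are fixed placeholders for illustration, whereas the paper derives the filterbank parameters from the analyzed impulse response.

```python
import numpy as np

def comb(x, delay, g, n_out):
    """Feedback comb filter y[n] = x[n] + g*y[n - delay], length n_out."""
    y = np.zeros(n_out)
    y[:len(x)] = x
    for n in range(delay, n_out):
        y[n] += g * y[n - delay]
    return y

def hybrid_reverb(x, ir, fs, split_ms=80.0,
                  combs=((1687, 0.77), (1601, 0.75), (2053, 0.73)),
                  late_gain=0.3):
    """Hybrid sketch: early reflections from convolving with the first
    `split_ms` of the measured IR; a comb bank with placeholder
    parameters stands in for the analyzed late reverberation."""
    n_early = int(split_ms * 1e-3 * fs)
    early = np.convolve(x, ir[:n_early])
    n_out = len(early) + max(d for d, _ in combs) * 8
    out = np.zeros(n_out)
    out[:len(early)] += early
    for d, g in combs:
        out += (late_gain / len(combs)) * comb(x, d, g, n_out)
    return out

fs = 8000
ir = np.exp(-np.arange(800) / 200.0)   # toy decaying impulse response
x = np.zeros(10); x[0] = 1.0           # unit impulse input
out = hybrid_reverb(x, ir, fs)
```

A production design would typically add allpass stages and a crossfade at the early/late split; the point here is only the two parallel signal paths.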