AES Amsterdam 2008
Spatial Audio Perception and Processing - 1
Paper Session P3
Saturday, May 17, 14:00 — 18:00
Chair: Gunther Theile
P3-1 Objective and Subjective Evaluation of Urban Acoustic Modeling and Auralization —Yuliya Smyrnova, Yan Meng, Jian Kang, University of Sheffield - Western Bank, Sheffield, UK
This paper presents the results of an objective and subjective evaluation of a simulation and auralization system based on the combined ray-tracing and radiosity (CRR) model. Auralization of an urban square has been carried out with various boundary reflection patterns (purely specular, purely diffuse, and a mix of specular and diffuse) using two audio stimuli. The subjective evaluation reveals a strong influence of both the sound source and the reflection pattern. Despite similarities in objective measures, there are noticeable differences in subjective attributes between signals based on simulated and on measured impulse responses; nevertheless, current auralization algorithms remain adequate for simulating real urban environments.
Convention Paper 7325
P3-2 Virtual vs. Actual Multichannel Acoustical Recording —Gavin Kearney, Trinity College - Dublin, Ireland; Jeff Levison, Euphonix, Inc. - Palo Alto, CA, USA
We present a comparison of live recordings of a choral ensemble with dry recordings of the same performers, with the acoustic environment reconstructed from impulse responses of the original reverberant performance space. Binaural measurements are used to classify the recordings objectively, and the perceptual attributes are investigated through a series of subjective listening tests. It is shown that a panel of expert listeners can perceive the differences between dry recordings convolved with linear time-invariant (LTI) impulse responses and actual acoustical recordings.
Convention Paper 7326
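The LTI reconstruction compared in this paper, convolving a dry recording with a measured room impulse response, can be sketched minimally as follows (a generic illustration of the convolution step, not the authors' implementation; signal values are toy data):

```python
import numpy as np

def auralize(dry, rir):
    """Convolve a dry (anechoic) recording with a measured room
    impulse response (LTI reconstruction). Both inputs are 1-D
    float arrays at the same sample rate."""
    wet = np.convolve(dry, rir)
    # Normalize to avoid clipping when written to a fixed-point file.
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a click through a two-tap "room".
dry = np.array([1.0, 0.0, 0.0])
rir = np.array([1.0, 0.0, 0.5])  # direct sound plus one reflection
print(auralize(dry, rir))        # direct path followed by a delayed echo
```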
P3-3 Virtual Sources and Moving Targets —Glenn Dickins, David Cooper, David McGrath, Dolby Laboratories - Sydney, NSW, Australia
This paper presents an analysis of the effects of listener mobility on the stability of virtual source images created by a pair of loudspeakers. A spherical head is used to generate analytic head-related transfer functions (HRTFs), from which we create a simple perceptual localization model for the forward half of the horizontal plane. This model is then used to investigate changes in perceived source localization as the listener moves. The analysis demonstrates that, even with this simple model and the assumption of small listener movements, the source image becomes unstable at a relatively low frequency. Given that at such low frequencies the spherical head model is a reasonable approximation of measured HRTFs, this work suggests that individualized HRTF and pinna functions are of little benefit when designing a virtualizer system that allows for some listener mobility.
Convention Paper 7327
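One standard closed-form consequence of the spherical-head assumption is Woodworth's interaural time difference (ITD) formula; the sketch below shows that formula only, not the paper's full analytic HRTF model, and the head radius is a common textbook default rather than a value from the paper:

```python
import math

def woodworth_itd(azimuth_rad, head_radius=0.0875, c=343.0):
    """High-frequency ITD approximation for a rigid spherical head:
    ITD = (a / c) * (theta + sin(theta)), where theta is the source
    azimuth measured from straight ahead, a the head radius in meters,
    and c the speed of sound in m/s."""
    return (head_radius / c) * (azimuth_rad + math.sin(azimuth_rad))

# ITD grows from zero at the front to its maximum at the side.
print(woodworth_itd(0.0))                # 0.0 s, source dead ahead
print(woodworth_itd(math.pi / 2) * 1e6)  # roughly 656 microseconds at 90 degrees
```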
P3-4 On the Use of Directional Loudspeakers to Create a Sound Source Close to the Listener —Aki Härmä, Steven van de Par, Werner de Bruijn, Philips Research Laboratories - Eindhoven, The Netherlands
It is sometimes desirable to create the illusion that a sound source is closer to the listener than the nearest loudspeaker. By using highly directional loudspeakers, one may manipulate the ratio of direct to reverberant energy and thereby change the distance cues, making the sound source appear very close to the listener. In this paper we present a method that combines highly directional sound with surround audio reproduction to produce controllable distance effects between the listener and the nearest loudspeakers.
Convention Paper 7328
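The distance cue the abstract manipulates is commonly quantified as the direct-to-reverberant ratio (DRR) of an impulse response. A minimal sketch of one conventional DRR estimate (a generic textbook definition with an assumed 2.5 ms direct window, not the authors' measure) is:

```python
import numpy as np

def drr_db(rir, fs, direct_ms=2.5):
    """Direct-to-reverberant ratio of an impulse response, in dB.
    The 'direct' part is a short window around the strongest peak;
    everything after that window counts as reverberant energy."""
    peak = int(np.argmax(np.abs(rir)))
    win = int(direct_ms * 1e-3 * fs)
    direct = np.sum(rir[max(0, peak - win):peak + win + 1] ** 2)
    reverb = np.sum(rir[peak + win + 1:] ** 2)
    return 10.0 * np.log10(direct / reverb)

# Toy RIR: a strong direct spike followed by a weak exponential tail.
fs = 48000
rir = np.zeros(fs // 10)
rir[100] = 1.0
rir[400:] = 0.01 * np.exp(-np.arange(len(rir) - 400) / 2000.0)
print(drr_db(rir, fs))  # clearly positive: energy is mostly direct
```

Raising the DRR (as a highly directional loudspeaker does by exciting less of the room) pushes the apparent source toward the listener.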
P3-5 Directional Analysis of Sound Field with Linear Microphone Array and Applications in Sound Reproduction —Jukka Ahonen, Ville Pulkki, Helsinki University of Technology - Espoo, Finland; Fabian Küch, Markus Kallinger, Richard Schultz-Amling, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
The use of a linear microphone array composed of two closely spaced omnidirectional microphones as input to a teleconferencing application of Directional Audio Coding (DirAC) is presented. DirAC is a method for spatial sound processing in which the direction of arrival and the diffuseness of sound are analyzed and used for different purposes in reproduction. Two-dimensional planar arrays have been used so far to generate input signals for DirAC, in which case a two-dimensional sound field can be measured directly. In this paper a one-dimensional linear array is used to provide input signals for one-dimensional direction and diffuseness analysis in DirAC. Listening tests are conducted to evaluate the intelligibility of speech with simultaneous talkers when the linear array is used in teleconference applications.
Convention Paper 7329
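A much-simplified stand-in for the one-dimensional direction analysis described above is to estimate the inter-microphone delay between the two omnis by cross-correlation; the sign of the delay then gives a left/right direction of arrival. This sketch is generic and does not reproduce DirAC's intensity-and-diffuseness analysis:

```python
import numpy as np

def estimate_delay(x1, x2, max_lag):
    """Estimate the delay of x2 relative to x1 (in samples) as the lag
    that maximizes their cross-correlation. The edges are trimmed so
    that np.roll's wraparound does not contaminate the sums."""
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.sum(x1[max_lag:-max_lag] *
                    np.roll(x2, -lag)[max_lag:-max_lag]) for lag in lags]
    return list(lags)[int(np.argmax(corrs))]

# A noise burst reaching microphone 2 three samples after microphone 1.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
x1 = s
x2 = np.roll(s, 3)
print(estimate_delay(x1, x2, 10))  # → 3
```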
P3-6 The SoundScape Renderer: A Unified Spatial Audio Reproduction Framework for Arbitrary Rendering Methods —Matthias Geier, Jens Ahrens, Sascha Spors, Technische Universität Berlin - Berlin, Germany
The SoundScape Renderer is a versatile software framework for real-time spatial audio rendering. The modular system architecture allows the use of arbitrary rendering methods. Three rendering modules are currently implemented: Wave Field Synthesis, Vector Base Amplitude Panning, and Binaural Rendering. After a description of the software architecture, the implementation of the available rendering methods is explained, and the graphical user interface is shown, together with the network interface for remote control of the virtual audio scene. Finally, the Audio Scene Description Format, a system-independent storage format, is briefly presented.
Convention Paper 7330
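Of the three rendering modules named above, Vector Base Amplitude Panning has a particularly compact core: solve for the pair of loudspeaker gains whose weighted direction vectors sum to the source direction, then normalize. A minimal two-dimensional sketch (standard VBAP formulation, not code from the SoundScape Renderer itself):

```python
import numpy as np

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """Two-dimensional Vector Base Amplitude Panning: solve L g = p,
    where the columns of L are the unit direction vectors of the two
    loudspeakers and p is the source direction, then normalize the
    gains to unit power."""
    def unit(deg):
        a = np.radians(deg)
        return np.array([np.cos(a), np.sin(a)])
    L = np.column_stack([unit(spk1_deg), unit(spk2_deg)])
    g = np.linalg.solve(L, unit(source_deg))
    return g / np.linalg.norm(g)

# A source midway between speakers at +30 and -30 degrees gets
# equal gains of 1/sqrt(2) on each speaker.
print(vbap_2d(0.0, 30.0, -30.0))
```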
P3-7 Initial Investigation of Signal Capture Techniques for Objective Measurement of Spatial Impression Considering Head Movement —Chungeun Kim, Russell Mason, Tim Brookes, University of Surrey - Guildford, Surrey, UK
A previous study showed that listeners naturally make head movements when attempting to evaluate source width and envelopment, as well as source location. To accommodate this finding in the development of an objective measurement model for spatial impression, two binaural capture systems were designed in this research: 1) a rotating Head and Torso Simulator (HATS), and 2) a sphere with multiple microphones. As an initial study, measurements of interaural time difference, interaural level difference, and interaural cross-correlation made with the HATS were compared with those made with a sphere containing two microphones. The magnitude of the differences was judged in a perceptually relevant manner by comparing them with the just-noticeable differences of these parameters.
Convention Paper 7331
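The three interaural parameters compared in this study can be computed from a binaural pair in a few lines. The sketch below uses textbook definitions (energy ratio for level difference, best cross-correlation lag within about 1 ms for time difference and cross-correlation coefficient); it is an illustration, not the authors' measurement model:

```python
import numpy as np

def binaural_cues(left, right, fs, max_itd_s=1e-3):
    """Interaural level difference (dB), interaural time difference (s),
    and interaural cross-correlation coefficient from a binaural pair,
    searching lags up to +/- max_itd_s seconds."""
    ild = 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))
    max_lag = int(max_itd_s * fs)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.sum(left * np.roll(right, -lag)) for lag in lags]) / norm
    itd = lags[int(np.argmax(np.abs(cc)))] / fs
    iacc = float(np.max(np.abs(cc)))
    return ild, itd, iacc

# Identical ear signals: 0 dB ILD, 0 s ITD, cross-correlation of 1.
rng = np.random.default_rng(1)
s = rng.standard_normal(4800)
print(binaural_cues(s, s, 48000))
```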
P3-8 A Second Order Differential Microphone Technique for Spatially Encoding Virtual Room Acoustics —Alexander Southern, Damian Murphy, University of York - Heslington, York, UK
Room acoustics modeling using a numerical simulation technique known as the Digital Waveguide Mesh (DWM) has previously been presented as a suitable method for measuring spatial room impulse responses (RIRs) of virtual enclosed spaces. In this paper a new method for capturing the DWM-modeled sound field using an array of spatially distributed pressure-sensitive receivers is presented. The polar response of the resulting second-order virtual microphone is measured and compared to the theoretical polar response. The approach is shown to be capable of decomposing the modeled sound field into the second-order spherical harmonic components typically associated with second-order Ambisonics.
Convention Paper 7332
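The theoretical polar response of a second-order differential pickup can be illustrated with the simplest such array: three collinear pressure receivers weighted (+1, -2, +1), whose normalized far-field pattern approaches cos²(θ) when the spacing is small relative to the wavelength. This is a generic textbook configuration for illustration; the spacing and frequency below are assumptions, not values from the paper:

```python
import numpy as np

def second_order_response(theta, freq, d=0.01, c=343.0):
    """Far-field magnitude response of a second-order differential
    array: three collinear omni receivers at spacing d, weighted
    (+1, -2, +1), for plane waves arriving from angles theta."""
    k = 2.0 * np.pi * freq / c
    positions = np.array([-d, 0.0, d])
    # Plane-wave phase at each receiver, for each arrival angle.
    phases = np.exp(1j * k * np.outer(positions, np.cos(theta)))
    return np.abs(np.array([1.0, -2.0, 1.0]) @ phases)

theta = np.linspace(0.0, np.pi, 181)
resp = second_order_response(theta, 500.0)
resp /= resp.max()
# The pattern peaks on-axis and has a null broadside, like cos^2(theta).
print(resp[0], resp[90])
```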
Last Updated: 2008-06-12 (tendeloo)