AES Conventions and Conferences


Monday, October 13, 1:30 pm – 4:30 pm
Session Q Spatial Audio

Q-1 An Investigation of Layered Sound
Peter Mapp, Peter Mapp Associates, Colchester, UK
A new technique for improving the spaciousness of reproduced sound has been investigated. The technique uses a combination of conventional pistonic loudspeakers and distributed-mode loudspeaker (DML) devices. Objective measurements made in a range of rooms show that “layered sound” affects parameters such as the interaural cross-correlation (IACC) and lateral energy fraction, as well as centre time and early decay time. A number of conditions were investigated, including listening room configuration and the relative sound levels of the loudspeakers. Limited subjective testing was carried out to ascertain the preferred relative levels of the conventional and DML loudspeakers; the optimum was found with the DMLs set within the range –5 dB ± 3 dB relative to the conventional stereo loudspeakers. The configuration of the listening room and the type of program material (and recording technique) were also found to be significant factors. It is shown that, over a range of conditions, layered sound enhances the perceived spaciousness, envelopment, and clarity of reproduced sound, though some changes to the original stereo image were noted.
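The IACC measure named above can be illustrated with a short sketch, assuming the conventional definition (the peak of the normalized interaural cross-correlation within a ±1 ms lag range); the function name and plain-Python implementation are illustrative, not taken from the paper:

```python
import math

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the peak of the
    normalized cross-correlation between the two ear signals over
    lags of +/- max_lag_ms (conventionally 1 ms)."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = math.sqrt(sum(x * x for x in left) * sum(x * x for x in right))
    if norm == 0.0:
        return 0.0
    n = min(len(left), len(right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        acc = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                acc += left[i] * right[j]
        best = max(best, abs(acc) / norm)
    return best
```

Identical left and right signals give an IACC of 1.0; decorrelated signals, as produced by strong lateral reflections, give lower values.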

Q-2 Authoring System for Wave Field Synthesis Content Production
Frank Melchior, Thomas Röder, Sandra Brix, Stefan Wabnik, Fraunhofer IIS-AEMT, Ilmenau, Germany; Christian Riegel, Tonbüro Berlin, Berlin, Germany
Wave field synthesis (WFS) permits the reproduction of a sound field with correct localization and spatial impression over nearly the whole reproduction room. Because of these properties, WFS shows enormous potential for the creation of audio in combination with motion pictures. For this application, special authoring systems are being developed that give audio engineers the ability to automate the positioning of sound sources, taking several ergonomic and technical factors into account. This paper presents first experiences with the mixing process for content production using WFS, demonstrates the capabilities of the authoring system, and outlines future developments.
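As background to the rendering that such an authoring system drives, a minimal sketch of WFS-style loudspeaker feeds for a virtual point source, assuming a simplified 2.5D driving function (delay by propagation distance, 1/√r amplitude); the function name and simplifications are assumptions, not the authors' implementation:

```python
import math

C = 343.0  # speed of sound in air, m/s

def wfs_delays_gains(source_xy, speaker_xys):
    """Per-loudspeaker delay and gain for a virtual point source:
    each secondary source delays the signal by its distance from
    the virtual source and attenuates with 1/sqrt(r) spreading
    (a common 2.5D WFS simplification)."""
    out = []
    for sx, sy in speaker_xys:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        delay = r / C                        # seconds
        gain = 1.0 / math.sqrt(max(r, 1e-6))  # avoid div-by-zero at source
        out.append((delay, gain))
    return out
```

An authoring system in effect automates recomputing these parameters as the engineer moves a source, rather than leaving delay and gain laws to be set by hand.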

Q-3 A Sound Localizer Robust to Reverberation
José Vieira, Luís Almeida, Universidade de Aveiro, Aveiro, Portugal
This paper proposes an intelligent acoustic sensor able to localize sound sources in acoustic environments with strong reverberation. The proposed algorithm is inspired by the precedence effect exploited by the human auditory system and uses only two acoustic sensors. It implements a modified version of the algorithm proposed by Huang, which uses the precedence effect to achieve robust sound localization even in reverberant environments. The localization system was implemented on a C31 DSP for real-time demonstration, and several experiments were performed showing the validity of the solution. Finally, the paper also proposes a method to estimate the reverberation decay online, using only the received sound signals.
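The idea of trusting only the first wavefront can be sketched as follows, assuming a basic onset-gated cross-correlation for the time difference of arrival between the two sensors; this illustrates the principle only, not Huang's algorithm or the authors' modification of it:

```python
def tdoa_precedence(left, right, fs, win=64, thresh=0.5):
    """Precedence-effect-inspired TDOA estimate: cross-correlate only
    a short window after the first strong onset, so later reflections
    are ignored. Onset = first sample exceeding thresh * peak."""
    peak = max(max(abs(x) for x in left), max(abs(x) for x in right))
    onset = 0
    for i in range(min(len(left), len(right))):
        if abs(left[i]) >= thresh * peak or abs(right[i]) >= thresh * peak:
            onset = i
            break
    a = left[onset:onset + win]
    b = right[onset:onset + win]
    best_lag, best = 0, float("-inf")
    for lag in range(-win + 1, win):
        acc = 0.0
        for i, x in enumerate(a):
            j = i + lag
            if 0 <= j < len(b):
                acc += x * b[j]
        if acc > best:
            best, best_lag = acc, lag
    return best_lag / fs  # positive when the right channel lags
```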

Q-4 Modification of Loudspeaker Generated Position Cues Through Assistant Headphones
Banu Gunel, Queen's University of Belfast, Belfast, UK
Fidelity of the spatial auditory displays generated by stereophonic loudspeakers degrades outside the sweet spot. This paper proposes a method to analyze and modify a sound scene created by stereophonic loudspeakers with the help of assistant headphones. Loudspeaker positions and the rough shape of the listening room are found by analyzing B-format recordings at the listener position. Time and level differences between the received signals are found, which are then used in generating leading, lagging, and panned virtual sounds from the original sounds through headphones. Headphone and loudspeaker signals together provide direct-echo pairs at virtual positions, creating a virtual sweet spot for the listener. The system improves listening conditions outside the sweet spot and is extendable to multiple listeners.
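One of the cues the system extracts, the level difference between the received signals, might be computed as below, assuming a simple RMS ratio in dB as the level measure (an assumption; the paper does not specify its measure):

```python
import math

def level_difference_db(sig_a, sig_b):
    """Level difference in dB between two channel signals,
    computed as the ratio of their RMS values."""
    rms_a = math.sqrt(sum(x * x for x in sig_a) / len(sig_a))
    rms_b = math.sqrt(sum(x * x for x in sig_b) / len(sig_b))
    return 20.0 * math.log10(rms_a / rms_b)
```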

Q-5 Spherical Microphone Array for Spatial Sound Recording
Jens Meyer, Tony Agnello, mh acoustics, Summit, NJ, USA
This paper describes a beamforming spherical microphone array consisting of many acoustic pressure sensors mounted on the surface of a rigid sphere. The beamformer is based on a spherical harmonic decomposition of the sound field. We show that this design allows a simple and computationally efficient, yet very flexible, beamformer structure. The spherical shape of the array in combination with the beamformer allows the look direction to be steered to any angle in 3-D space. Measurements from an array of 37.5 mm radius consisting of 24 sensors are presented. The paper focuses on the applications of directional sound pickup and sound field analysis and reconstruction. Other suitable applications include room acoustic measurements and forensic beamforming.
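The steering principle can be illustrated at first order, where the spherical harmonic components reduce to one omnidirectional signal and three figure-of-eight signals, and a virtual cardioid is pointed at the look direction. The paper's array operates at higher orders, so this is only a simplified sketch with assumed names:

```python
import math

def steer_first_order(w, x, y, z, azimuth, elevation):
    """Virtual cardioid steered to (azimuth, elevation), formed from
    the order-0 (w) and order-1 (x, y, z) components of a spherical
    harmonic decomposition of the sound field."""
    dx = math.cos(elevation) * math.cos(azimuth)  # look-direction unit vector
    dy = math.cos(elevation) * math.sin(azimuth)
    dz = math.sin(elevation)
    return 0.5 * w + 0.5 * (dx * x + dy * y + dz * z)
```

For a plane wave arriving from the look direction the cardioid passes the signal at full gain; from the opposite direction it cancels, which is the essence of steering the beam purely in the decomposed domain.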

Q-6 Modeling Sound Source Localization Under the Precedence Effect Using Multivariate Gaussian Mixtures
Huseyin Hacihabiboglu, Queen's University Belfast, Belfast, UK
The precedence effect refers to the property of the human auditory system that enables accurate localization of sound sources even when many interfering echoes of the original signal are present. Perception of the elevation, azimuth, and distance of a sound source is affected by the presence of an echo. The multivariate Gaussian mixture model proposed in this paper combines azimuth, elevation, and distance perception, and provides a general framework for modeling sound source localization under the precedence effect. The model interprets the precedence effect as a spatial property rather than a temporal one.
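For reference, a multivariate Gaussian mixture density over a localization vector (azimuth, elevation, distance) can be evaluated as below; this is a generic diagonal-covariance sketch of the model class, not the paper's fitted model or its parameters:

```python
import math

def gmm_density(point, weights, means, variances):
    """Density of a diagonal-covariance Gaussian mixture at `point`.
    Each component has a weight, a mean vector, and a per-dimension
    variance vector; weights are assumed to sum to 1."""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        logp = 0.0
        for p, m, v in zip(point, mu, var):
            logp += -0.5 * math.log(2.0 * math.pi * v) - (p - m) ** 2 / (2.0 * v)
        total += w * math.exp(logp)
    return total
```

In such a model, each mixture component can capture one perceived source position (e.g., near the leading or the lagging sound), with the weights reflecting how often listeners report each.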

AES 115th Convention

(C) 2003, Audio Engineering Society, Inc.