120th AES Convention - Paris, France - Dates: Saturday May 20 - Tuesday May 23, 2006 - Porte de Versailles


P9 - Posters: Spatial Perception and Processing

Sunday, May 21, 09:00 — 10:30

P9-1 Evaluation of a 3-D Audio System with Head Tracking
Jan Abildgaard Pedersen, AM3D A/S - Aalborg, Denmark, and Lyngdorf Audio - Skive, Denmark; Pauli Minnaar, AM3D A/S - Aalborg, Denmark
A 3-D audio system was evaluated in an experiment in which listeners had to “shoot down” real and virtual sound sources appearing from different directions around them. The 3-D audio was presented through headphones, and head tracking was used. To investigate the influence of head movements, both long and short stimuli were used. Twenty-six people participated, half of them students and half pilots. The results were analyzed by calculating a localization offset and a localization uncertainty (one plausible computation is sketched after this entry). For azimuth no significant offset was found, whereas for elevation an offset was found that is strongly correlated with the stimulus elevation. The uncertainty for real and virtual sound sources was 10 and 14 degrees, respectively, in azimuth, and 12 and 24 degrees in elevation.

[Poster Presentation Associated with Paper Presentation 3-5]
Convention Paper 6654 (Purchase now)
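
A minimal sketch of one plausible reading of these statistics, assuming the offset is the mean signed error and the uncertainty the standard deviation of that error; the paper's exact definitions may differ, and the data below are hypothetical:

    import numpy as np

    def offset_and_uncertainty(target_deg, response_deg):
        # Signed localization error, wrapped to (-180, 180] degrees so that
        # azimuth wrap-around does not inflate the statistics.
        err = (np.asarray(response_deg, dtype=float)
               - np.asarray(target_deg, dtype=float) + 180.0) % 360.0 - 180.0
        return err.mean(), err.std(ddof=1)  # offset, uncertainty

    # Hypothetical azimuth judgments for one listener:
    offset, uncertainty = offset_and_uncertainty([0, 45, 90], [2, 44, 97])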

P9-2 Design and Verification of HeadZap, a Semi-Automated HRIR Measurement System
Durand Begault, Martine Godfroy, NASA Ames Research Center - Moffett Field, CA, USA; Joel Miller, VRSonic, Inc. - San Francisco, CA, USA; Agnieszka Roginska, AuSIM Incorporated - Palo Alto, CA, USA; Mark Anderson, Elizabeth Wenzel, NASA Ames Research Center - Moffett Field, CA, USA
This paper describes the design, development, and acoustic verification of HeadZap, a semi-automated system for measuring head-related impulse responses (HRIRs), designed by AuSIM Incorporated and modified by the NASA Ames Research Center Spatial Auditory Display Laboratory. HeadZap uses an array of twelve loudspeakers to measure 432 HRIRs at 10-degree intervals in both azimuth and elevation in a nonanechoic environment (the grid arithmetic is sketched after this entry). Application to real-time rendering using SLAB software in an audio-visual localization experiment is discussed.

[Poster Presentation Associated with Paper Presentation 3-6]
Convention Paper 6655 (Purchase now)
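
The figure of 432 follows from the 10-degree grid: 36 azimuth steps times 12 elevation steps. A sketch of the enumeration, where the exact elevation span is an illustrative assumption not stated in the abstract:

    # 36 azimuths (0..350 degrees) x 12 elevations = 432 measurement positions.
    azimuths = range(0, 360, 10)       # 36 azimuth steps
    elevations = range(-40, 80, 10)    # 12 elevation steps; span assumed
    grid = [(az, el) for az in azimuths for el in elevations]
    assert len(grid) == 432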

P9-3 Directional Audio Coding: Filterbank and STFT-Based Design
Ville Pulkki, Helsinki University of Technology - Espoo, Finland; Christof Faller, EPFL - Lausanne, Switzerland
Directional audio coding (DirAC) is a method for spatial sound representation, applicable to arbitrary audio reproduction methods. In the analysis part, properties of the sound field at a single point are measured in time and frequency and transmitted as side information together with one or more audio waveforms (a simplified analysis is sketched after this entry). In the synthesis part, the properties of the sound field are reproduced using separate techniques for point-like virtual sources and diffuse sound. Different implementations of DirAC are described and the differences between them are discussed. A modification of DirAC is presented that provides a link to Binaural Cue Coding and to parametric multichannel audio coding in general (e.g., MPEG Surround).

[Poster Presentation Associated with Paper Presentation 3-9]
Convention Paper 6658 (Purchase now)
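
A minimal sketch of the kind of single-point analysis DirAC performs, assuming B-format input; scale factors and the temporal averaging usually applied for diffuseness are omitted, and this is not the authors' implementation:

    import numpy as np

    def dirac_analysis(W, X, Y, Z):
        # W, X, Y, Z: complex STFT coefficients (time x frequency) of a
        # B-format signal: omnidirectional pressure plus three dipoles.
        # Active intensity per bin, up to constant factors.
        Ix = np.real(np.conj(W) * X)
        Iy = np.real(np.conj(W) * Y)
        Iz = np.real(np.conj(W) * Z)
        # Direction of arrival points against the net energy flow.
        azimuth = np.arctan2(-Iy, -Ix)
        elevation = np.arctan2(-Iz, np.hypot(Ix, Iy))
        # Energy density per bin, constants again dropped.
        E = 0.5 * (np.abs(W)**2 + np.abs(X)**2 + np.abs(Y)**2 + np.abs(Z)**2)
        # Diffuseness: near 0 for a single plane wave, near 1 in a diffuse field.
        psi = 1.0 - np.sqrt(Ix**2 + Iy**2 + Iz**2) / np.maximum(E, 1e-12)
        return azimuth, elevation, psi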

P9-4 Comprehensive Analysis of Loudspeaker Span Effects on Crosstalk Cancellation in Spatial Sound Reproduction
Mingsian R. Bai, Chih-Chung Lee, National Chiao-Tung University - Hsin-Chu, Taiwan
This paper seeks to pinpoint the optimal loudspeaker span that best reconciles the robustness and performance of a crosstalk cancellation system (CCS); a generic CCS filter design is sketched after this entry. Two sweet-spot definitions are employed to assess robustness. Besides the point-source model, head-related transfer functions are employed in the simulation to capture more design aspects of practical situations. Three span angles, 10, 60, and 120 degrees, are compared via subjective experiments, and the results are examined using analysis of variance (ANOVA). The results indicate that not only CCS performance but also the panning effect and head shadowing dictate the overall performance and robustness. The 120-degree arrangement performs comparably to the 60-degree arrangement and is preferred over the 10-degree arrangement.
Convention Paper 6701 (Purchase now)
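
A minimal sketch of the textbook CCS filter design by regularized inversion of the 2x2 acoustic plant per frequency bin; the regularization weight beta is one common way to trade cancellation performance against robustness, and none of this is taken from the paper itself:

    import numpy as np

    def ccs_filters(H, beta=1e-3):
        # H: (nbins, 2, 2) plant matrix, H[k, ear, speaker] being the
        # transfer function from each loudspeaker to each ear at bin k.
        C = np.empty_like(H)
        I = np.eye(2)
        for k in range(H.shape[0]):
            Hk = H[k]
            # Tikhonov-regularized inverse: C = H^H (H H^H + beta I)^-1
            C[k] = Hk.conj().T @ np.linalg.inv(Hk @ Hk.conj().T + beta * I)
        return C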

P9-5 A Perceptual Measure for Assessing and Removing Reverberation from Audio Signals
Thomas Zarouchas, John Mourjopoulos, Panagiotis Hatziantoniou, University of Patras - Patras, Greece; Joerg Buchholz, University of Western Sydney - Penrith South DC, New South Wales, Australia
A novel signal-dependent approach is followed here for modeling perceived distortions due to reverberation in audio signals. The method describes perceived monaural time-frequency and level distortions due to reverberation, which depend on the evolution of the reproduced signal. A Computational Auditory Masking Model (CAMM) is employed, taking the reverberant and reference (anechoic) signals as inputs and generating time-frequency maps of perceived distortions. From these maps, suitable functions can be derived in a number of sub-bands, allowing suppression of reverberation in the processed signal (a much-simplified gain derivation is sketched after this entry).
Convention Paper 6702 (Purchase now)
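
A much-simplified sketch of deriving sub-band suppression gains from reverberant and reference representations; the paper derives its functions from CAMM distortion maps, which this crude magnitude-ratio stand-in does not model:

    import numpy as np

    def suppression_gains(R, A, floor=0.1):
        # R, A: magnitude spectrograms (sub-band x time) of the reverberant
        # and reference (anechoic) signals. The gain approaches 1 where the
        # reverberant signal matches the dry one and falls toward `floor`
        # where reverberant excess dominates.
        g = np.minimum(A / np.maximum(R, 1e-12), 1.0)
        return np.maximum(g, floor)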

P9-6 Investigating Spatial Audio Coding Cues for Meeting Audio Segmentation
Eva Cheng, Ian Burnett, Christian Ritz, University of Wollongong - Wollongong, New South Wales, Australia
As participants in multiparty meetings are generally stationary when actively speaking, participant location information can be used to segment the recorded meeting audio into speaker “turns.” In this paper, speaker location information derived from the spatial cues generated by spatial audio coding techniques is investigated. The validity of using spatial cues for meeting audio segmentation is explored by examining multiple-microphone meeting recording techniques and by extracting and comparing the spatial cues used by different spatial audio coders (the basic cue computations are sketched after this entry). Experimental results show that the statistical relationship between speaker location and interchannel level- and phase-based spatial cues depends strongly on the microphone pattern. Results also indicate that interchannel correlation-based spatial cues represent location information that is ambiguous for meeting audio segmentation.
Convention Paper 6703 (Purchase now)
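
A minimal sketch of the interchannel cue families typical of parametric spatial audio coders (level difference, phase difference, correlation), computed over one sub-band; this illustrates the kinds of cues compared, not the authors' extraction code:

    import numpy as np

    def spatial_cues(X1, X2, eps=1e-12):
        # X1, X2: complex STFT coefficients of two channels within one
        # sub-band, as 1-D arrays over time.
        p11 = np.mean(np.abs(X1)**2)
        p22 = np.mean(np.abs(X2)**2)
        p12 = np.mean(X1 * np.conj(X2))
        icld = 10.0 * np.log10((p11 + eps) / (p22 + eps))  # level difference, dB
        icpd = np.angle(p12)                               # phase difference, rad
        icc = np.abs(p12) / (np.sqrt(p11 * p22) + eps)     # correlation/coherence
        return icld, icpd, icc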

P9-7 The Effect of Audio Compression Techniques on Binaural Audio Rendering
Fabien Prezat, Brian Katz, Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI-CNRS) - Orsay, France
The use of lossy audio compression is becoming increasingly common. Many studies have concentrated on the audio quality of such compression techniques, predominantly in a monaural context. This paper investigates the effects of audio compression techniques on spatialized audio, specifically binaural audio. Various compression techniques (AAC, ATRAC, MP2, and MP3) were applied to several test signals, at various bit rates where possible. The paper presents numerical and perceptual comparisons of the variations in interaural time difference (ITD) due to audio compression (a basic ITD estimator is sketched after this entry). The effect on spectral peaks and notches was also investigated, as these spectral cues (contained in the head-related transfer function, HRTF) are necessary for more precise localization, including front-back discrimination and elevation.
Convention Paper 6704 (Purchase now)
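
A minimal sketch of estimating ITD as the lag maximizing the interaural cross-correlation, one common way to quantify the ITD variations the paper measures; the search range and method are assumptions, not the authors' procedure:

    import numpy as np

    def estimate_itd(left, right, fs, max_itd=1e-3):
        # left, right: ear signals as 1-D NumPy arrays; fs: sample rate in Hz.
        # Human ITDs stay within roughly +/- 1 ms, hence the default range.
        n = len(left)
        max_lag = int(max_itd * fs)
        lags = range(-max_lag, max_lag + 1)
        xc = [np.dot(left[max(0, -l):n - max(0, l)],
                     right[max(0, l):n - max(0, -l)]) for l in lags]
        return lags[int(np.argmax(xc))] / fs  # ITD in seconds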

P9-8 Sound Source Obstruction in an Interactive 3-Dimensional MPEG-4 Environment
Beatrix Steglich, Ulrich Reiter, Technische Universität Ilmenau - Ilmenau, Germany
This paper describes the continuation of research on sound source obstruction in virtual scenes. An algorithm for determining sound source obstruction was implemented in the described 3-D MPEG-4 environment. With the help of the MPEG-4 Advanced AudioBIFS node AcousticMaterial, acoustic properties are assigned to potential obstructors in a virtual scene. Various implementations of acoustic obstruction are explained (one candidate rendering is sketched after this entry). Furthermore, a bimodal subjective assessment was performed to identify the best implementation of obstruction, and its results are presented in depth. Additionally, we present a concept for a second planned bimodal assessment comparing gain and frequency filtering and give an outlook on further research and development in the area of immersive acoustics.
Convention Paper 6705 (Purchase now)
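
A minimal sketch of one candidate obstruction rendering of the kind the paper compares: a broadband gain combined with frequency-dependent (low-pass) filtering. The parameter values are illustrative and are not taken from the paper or from any AcousticMaterial fields:

    import numpy as np
    from scipy.signal import butter, lfilter

    def apply_obstruction(x, fs, gain_db=-9.0, cutoff_hz=2000.0):
        # Attenuate and low-pass the source signal x (1-D array, sample
        # rate fs) to mimic an object standing between source and listener.
        b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
        return (10.0 ** (gain_db / 20.0)) * lfilter(b, a, x)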


   