AES Conventions and Conferences


Session D Friday, November 30 2:00 pm-5:00 pm
SPATIAL AND MULTICHANNEL, PART 2
Chair: Nick Zacharov, Nokia Research Center, Tampere, Finland

2:00 pm

D-1 An Integrated Multidimensional Controller of Auditory Perspective in a Multichannel Sound Field

Jason Corey, Wieslaw Woszczyk, Geoff Martin and René Quesnel, McGill University, Montreal, Quebec, Canada

A system for control and synthesis of auditory perspective in a multichannel sound field is described in this paper. The system employs a sound field synthesis engine comprising several acoustic simulation devices working in parallel that are all controlled by one intuitive, programmable controller. The controller allows smooth, efficient, and dynamic modification of the spatial attributes of a multichannel sound field.

Convention Paper 5417

 

2:30 pm

D-2 Adaptive Synthesis of Immersive Audio Rendering Filters

Jong-Soong Lim and Chris Kyriakakis, University of Southern California, Los Angeles, CA, USA

One of the key limitations in spatial audio rendering over loudspeakers is the degradation that occurs as the listener's head moves away from the intended sweet spot. This paper proposes a method for designing immersive audio rendering filters using adaptive synthesis methods that can update the filter coefficients in real time. These methods can be combined with a head tracking system to compensate for changes in the listener's head position. The rendering filter's weight vectors are synthesized in the frequency domain using magnitude and phase interpolation in frequency subbands.
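As a rough illustration of the subband magnitude-and-phase interpolation the abstract mentions (not the authors' implementation; the filters and blending scheme below are assumed for the sketch), two frequency-domain rendering filters measured at adjacent head positions can be blended by interpolating magnitude and unwrapped phase separately rather than interpolating the complex values directly:

```python
import numpy as np

def interpolate_filter(H_a, H_b, alpha):
    """Interpolate two frequency-domain rendering filters.

    Magnitude and unwrapped phase are blended separately, which
    avoids the notching artifacts of complex-valued linear
    interpolation. alpha=0 returns H_a, alpha=1 returns H_b.
    """
    mag = (1 - alpha) * np.abs(H_a) + alpha * np.abs(H_b)
    phase = (1 - alpha) * np.unwrap(np.angle(H_a)) + alpha * np.unwrap(np.angle(H_b))
    return mag * np.exp(1j * phase)

# Toy filters standing in for measurements at two head positions
h_a = np.fft.rfft(np.array([1.0, 0.5, 0.25, 0.125]))
h_b = np.fft.rfft(np.array([0.8, 0.6, 0.3, 0.1]))
h_mid = interpolate_filter(h_a, h_b, 0.5)  # filter for the midpoint position
```

In a head-tracked system, `alpha` would be driven in real time by the listener's measured position between the two calibration points.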

Convention Paper 5422

 

3:00 pm

D-3 Unravelling the Perception of Spatial Sound Reproduction: Analysis and External Preference Mapping

Nick Zacharov and Kalle Koivuniemi, Nokia Research Center, Tampere, Finland

This paper presents the external preference mapping of the perception of spatial sound reproduction systems. Thirteen spatial sound samples and eight reproduction systems were subjectively tested in terms of preference and direct attribute ratings. The unravelling of this data to establish which perceptual attributes contribute to subjective preference is performed using multivariate calibration techniques, the results of which are presented and analyzed in detail. A predictive model of subjective preference has been developed and is presented for this class of spatial sound reproduction systems.
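The general shape of external preference mapping can be sketched as follows: project the systems' attribute ratings onto principal components, then regress the preference scores onto those components. This is a simplified stand-in for the multivariate calibration techniques the paper applies, and all data values here are invented for illustration:

```python
import numpy as np

# Toy data: 8 reproduction systems rated on 4 perceptual attributes
# (assumed values; the study used direct attribute ratings from listeners)
rng = np.random.default_rng(0)
attributes = rng.normal(size=(8, 4))
# Simulated preference driven mostly by the first attribute
preference = attributes @ np.array([0.9, -0.4, 0.1, 0.0]) + rng.normal(scale=0.05, size=8)

# Principal component scores of the centered attribute space
X = attributes - attributes.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s  # systems projected onto the principal components

# External mapping: regress preference onto the first two components
A = np.column_stack([np.ones(8), scores[:, :2]])
coef, *_ = np.linalg.lstsq(A, preference, rcond=None)
predicted = A @ coef
```

The fitted coefficients show which directions in the perceptual attribute space drive preference, which is the kind of predictive model the abstract describes.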

Convention Paper 5423

 

3:30 pm

D-4 Unravelling the Perception of Spatial Sound Reproduction: Language Development, Verbal Protocol Analysis, and Listener Training

Kalle Koivuniemi and Nick Zacharov, Nokia Research Center, Tampere, Finland

This paper presents the methods used in developing a descriptive language for a set of samples created for evaluating different spatial sound reproduction systems. The different methods of language development are discussed, and the language development process employed is explained in detail. The developed descriptive language is presented with the associated direct attribute scales. Lastly, the development of training samples is presented.

Convention Paper 5424

 

4:00 pm

D-5 The Active Listening Room Simulator: Part 2

Amber Naqvi and Francis Rumsey, University of Surrey, Guildford, Surrey, UK

This paper presents the results of computer simulation of active reflectors in a reference listening room, which are used to create artificial reflections in a two-loudspeaker stereo listening configuration. This constitutes the second phase of experiments in the active listening room project, involving the analysis of computer modeling results and loudspeaker selection based on free-field response. The aim of this project is to create a truly variable listening condition in a reference listening room by means of active simulation of key acoustic parameters, such as the early reflection pattern, early decay time, and reverberation time.
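For readers unfamiliar with early reflection modeling, the timing of the reflections such a system must reproduce can be computed with the image-source method. The sketch below handles only first-order reflections in an idealized rectangular room (the paper's simulations model an actual reference listening room, and the geometry here is assumed):

```python
import math

def first_order_reflections(src, lis, room, c=343.0):
    """Delays (ms, relative to the direct sound) of the six
    first-order wall reflections in a rectangular room,
    via the image-source method.

    src, lis: (x, y, z) positions in meters; room: (Lx, Ly, Lz).
    """
    images = []
    for axis, L in enumerate(room):
        for wall in (0.0, L):
            img = list(src)
            img[axis] = 2 * wall - src[axis]  # mirror source across the wall
            images.append(img)
    direct = math.dist(src, lis)
    # Reflected path length equals the image-to-listener distance
    return [(math.dist(img, lis) - direct) / c * 1000.0 for img in images]

# Hypothetical source/listener positions in a 5 m x 4 m x 3 m room
delays = first_order_reflections((1.0, 2.0, 1.2), (3.0, 2.0, 1.2), (5.0, 4.0, 3.0))
```

An active reflector system would then aim to generate reflections at these arrival times (with appropriate levels) to emulate a target room's early reflection pattern.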

Convention Paper 5425

 

4:30 pm

D-6 Ambiophonics: Achieving Physiological Realism in Music Recording and Reproduction

Ralph Glasgal, Ambiophonics Institute, Rockleigh, NJ, USA

Ambiophonics is the logical successor to stereophonics, 5.1, 6.0, 7.1, 10.2, or Ambisonics in the periphonic recording and reproduction of frontally staged music or drama. The paper shows how only two recording media channels, driving a multiloudspeaker surround Ambiophonic system, can consistently and optimally generate a 'you are there' sound field that the domestic concert hall listener can sense has normal binaural physiological verisimilitude. Ambiophonics can deliver such realism even from standard two-channel recordings, such as the existing library of LPs, CDs, DVDs, or SACDs, or via super-wide-stage recordings made using an Ambiophone.

Convention Paper 5426



(C) 2001, Audio Engineering Society, Inc.