AES Conventions and Conferences


Sunday, October 12, 9:00 am – 12:00 noon
Session J: Multichannel Audio

J-1 An Approach to Miking and Mixing of Music Ensembles Using Wave Field Synthesis—Clemens Kuhn, Düsseldorf Conservatory of Music and University of Applied Sciences, Düsseldorf, Germany; Renato Pellegrini, sonicEmotion AG, Zurich, Switzerland; Dieter Leckschat, Düsseldorf University of Applied Sciences, Düsseldorf, Germany; Etienne Corteel, IRCAM, Paris, France
The reproduction of sound using wave field synthesis (WFS) offers greater possibilities for rendering sonic space than standard 5.1 set-ups (panorama, acoustic holography, envelopment, etc.). Different microphone set-ups have been developed for this reproduction system, multimicrophone set-ups as well as microphone arrays. With multimicrophone techniques, the aesthetics of the sonic image are limited with regard to the localization of sound sources, spectrum, and blending, especially in so-called “classical” music recordings. The sources appear rather focused and dominant because the microphones capture the sound in the near field. Reproduction as point sources and convolution with room impulse responses lead to correct room reproduction in theory, but in practice the spectrum of the instruments and the impression of spatial depth require improvement. Microphone arrays are suitable for impulse response measurements but not flexible enough for direct music recording. The authors propose an approach to miking and mixing of music recordings that combines WFS techniques with phantom sources from a main microphone, adapting this stereophonic technique to the holographic properties of WFS. The approach was evaluated in an interactive mixing and listening test session in which a panel of sound engineers was invited to mix an orchestral recording. Several mixing tasks were specified (stable localization, blending, homogeneity, envelopment). The results of this test permit analysis of the aesthetic advantages as well as the limits of the proposed mixing approach.

J-2 Investigation of Interactions between Recording/Mixing Parameters and Spatial Subjective Attributes in the Frame of 5.1 Multichannel—Magali Deschamps, Conservatoire National Supérieur de Musique de Paris, Paris, France; Olivier Warusfel, Alexis Baskind, IRCAM, Paris, France
Subjective listening tests dedicated to 5.1 multichannel were conducted using various recording and mixing configurations. Two ambience microphone arrays (Hamasaki squares), differing in size, were used to record the hall reverberation in addition to direct-sound microphones, providing a separation between direct and reverberant sound. To study the optimization of array size, differences between the two reverberation recording systems were evaluated using a set of spatial subjective attributes. Post-processing parameters (time delay between direct sound and reverberation, front/back distribution of reverberation) were investigated using the same attributes. Results underlined significant differences between the two Hamasaki square systems. The time-delay parameter showed little influence on listener envelopment and apparent source width, whereas front/back distribution of reverberation showed a significant effect on these attributes.

J-3 Some Considerations for High-Resolution Audio—Wieslaw Woszczyk, McGill University, Montreal, Quebec, Canada
It has frequently been argued that high-resolution audio means an ultra-wide frequency range and that, given the limited sensitivity of human hearing at high frequencies, little is gained perceptually from high resolution. Not much laboratory evidence is found to counter this assertion, because psychoacoustic research has largely restricted itself to studying the effects of frequency range within the 20 Hz to 20 kHz band rather than outside of it. This paper reviews the remarkable complexities of auditory signals and looks at the precise distinctions the auditory system has to extract when analyzing the time/space attributes of auditory scenes. It is shown that high resolution in the temporal, spatial, spectral, and dynamic domains together determines the perceived quality of music and sound, and that temporal resolution may be the most important perceptually.

J-4 Virtual Acoustic System with a Multichannel Headphone—Ingyu Chun, Philip Nelson, University of Southampton, Southampton, UK
The performance of current virtual acoustic systems is highly sensitive to the geometry of the individual ear at high frequencies. The objective of this paper is to study a virtual acoustic system that may not be sensitive to individual ear shape. The incident sound field around the ear is reproduced by using a multichannel headphone. The results of computer simulations show that the desired sound pressure at the eardrum can be successfully replicated in a virtual acoustic environment using such a headphone.
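The core problem the abstract describes—driving several headphone transducers so that their combined response reproduces a target pressure at the eardrum—can be posed per frequency as a small inverse problem. The sketch below is purely illustrative and is not the authors' method: the transfer-function matrix and target pressure are random placeholders, and all dimensions (4 drivers, 64 frequency bins) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N_drv headphone drivers, with transfer functions to
# the eardrum sampled at N_f frequencies (random placeholders here; in a
# real system these would be measured or simulated responses).
N_drv, N_f = 4, 64
H = rng.standard_normal((N_f, N_drv)) + 1j * rng.standard_normal((N_f, N_drv))

# Target: the eardrum pressure the desired incident sound field would
# produce (again a placeholder signal).
p_target = rng.standard_normal(N_f) + 1j * rng.standard_normal(N_f)

# Per-frequency driver weights: with more drivers than control points the
# system is underdetermined, so take the minimum-norm solution via the
# Moore-Penrose pseudoinverse of each 1 x N_drv row of H.
W = np.stack([np.linalg.pinv(H[k:k + 1]) @ p_target[k:k + 1]
              for k in range(N_f)])

# Reproduced eardrum pressure: H w matches the target exactly because the
# single constraint per frequency can always be satisfied.
p_rep = np.array([H[k] @ W[k] for k in range(N_f)])
assert np.allclose(p_rep, p_target)
```

With multiple drivers there are spare degrees of freedom at each frequency; a practical design could spend them on robustness to ear-geometry variation, which is presumably where the insensitivity claimed in the abstract comes from.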

J-5 Perceptually Motivated Processing for Spatial Audio Microphone Arrays—Christoph Reller, Malcolm O. J. Hawksford, University of Essex, Essex, UK
A preliminary study is presented that investigates processing of microphone array signals for multichannel recording applications. Generalized, perceptually and acoustically based approaches are taken, where a spaced array of M microphones is mapped to an array of L loudspeakers. The principal objective is to establish transformation matrices that integrate both microphone and loudspeaker array geometry in order to reproduce a subjectively accurate illusion of the original sound field. Techniques of acoustical vector synthesis and plane wave reconstruction are incorporated at low frequencies migrating to an approach based upon head-related transfer functions (HRTFs) at higher frequencies. Error surfaces based on the HRTF reconstruction error are used to assess perceptually relevant solutions. Simulation results presented in a five-channel format are calculated for processed audio material with and without acoustical boundary reflections.
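The mapping described above—a transformation matrix taking M microphone signals to L loudspeaker feeds—can be sketched as a least-squares fit over a set of test source directions. This is an illustrative sketch only, not the authors' transformation: the array response and the desired loudspeaker gains are random placeholders, and the dimensions (8 microphones, 5 loudspeakers, 36 directions) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: M spaced microphones, L loudspeakers, and D
# test source directions used to fit the transformation matrix.
M, L, D = 8, 5, 36

# A (M x D): modeled microphone-array response to a source arriving from
# each test direction (random placeholder for a real array model).
A = rng.standard_normal((M, D))

# G (L x D): desired loudspeaker gains for each direction, e.g. from a
# panning law chosen by the designer (placeholder values here).
G = rng.standard_normal((L, D))

# Fit T (L x M) so that T @ A approximates G in the least-squares sense;
# T then maps live microphone signals directly to loudspeaker feeds.
T, *_ = np.linalg.lstsq(A.T, G.T, rcond=None)
T = T.T

# Apply the fitted matrix to a block of microphone signals.
mic_block = rng.standard_normal((M, 256))
spk_feeds = T @ mic_block
assert spk_feeds.shape == (L, 256)
```

A frequency-dependent version of the same idea—one matrix per band, with the target gains G derived from plane-wave reconstruction at low frequencies and from HRTF matching at high frequencies—would correspond more closely to the hybrid approach the abstract outlines.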

J-6 Scalable Tri-Play Recording for Stereo, ITU 5.1/6.1 2-D, and Periphonic 3-D (with Height) Compatible Surround Sound Reproduction—Robert (Robin) Miller III, Filmaker Technology, Bethlehem, PA, USA
Objectives: take the next step toward reproducing human hearing and make better recordings in 5.1. In life, we hear sources we see, but also reflections and reverberation we do not see. Each sonic arrival is individually colored by our unique HRTF, including height cues contributed by the pinna. Preserving 3-D directionality is key to life-like hearing. A practical, scalable approach (patent pending) is presented: a way to “transform” 3-D (full-sphere) recordings for uncompromised 2-D reproduction in stereo or 5.1/6.1 without any decoding. By adding a decoder and speakers, full 3-D is losslessly “reconstituted” from six-channel media. Experimental “tri-play” six-channel “PerAmbio 3D/2D” recordings have been made and demonstrated (AES 24th International Conference, Banff, June 2003) with well-received results.



© 2003, Audio Engineering Society, Inc.