AES Conventions and Conferences


Session H Sunday, May 9 13:00 h–17:00 h
MULTICHANNEL SOUND
Chair: Geoff Martin, Bang & Olufsen A/S, Struer, Denmark

H-1 Multiactuator Panel (MAP) Loudspeakers: How to Compensate for Their Mutual Reflections—Rik van Zon1, Etienne Corteel2, Diemer de Vries1, Olivier Warusfel2
1 Technical University of Delft, Delft, The Netherlands
2 IRCAM, Paris, France
Wave Field Synthesis (WFS) allows reproduction of the spatial and temporal properties of a target sound field over a large listening area. Thanks to their screen shape, Multi-Actuator Panels (MAPs) are a good alternative for WFS reproduction in multimedia installations. However, MAP loudspeakers act as reflectors for acoustic waves, and these reflections disturb the perception of the target sound field. A general listening-room compensation technique based on multichannel inversion is proposed that attenuates the early reflections caused by a reflector using loudspeakers integrated into that reflector (e.g., MAP loudspeakers). After an analysis of the geometrical arrangement of the panels, the method processes the free-field equalization of the loudspeaker array and the reflection compensation separately. Simulations and measurements show that the attenuation is effective over the entire listening area.
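The multichannel inversion underlying such compensation can be viewed as a regularized least-squares problem solved per frequency bin. The following Python sketch is illustrative only, not the authors' implementation; the variable names and the Tikhonov regularization term are assumptions:

```python
import numpy as np

def inversion_filters(H, d, beta=1e-3):
    """Regularized least-squares inverse filters per frequency bin.

    H    : (F, M, L) complex array of transfer functions from L
           loudspeakers to M control microphones, per frequency bin
    d    : (F, M)    complex array of desired (free-field) responses
    beta : Tikhonov regularization weight limiting filter effort
    Returns an (F, L) complex array of loudspeaker driving filters.
    """
    F, M, L = H.shape
    C = np.zeros((F, L), dtype=complex)
    for f in range(F):
        Hf = H[f]
        # C = (H^H H + beta I)^{-1} H^H d  -- classic regularized inversion
        A = Hf.conj().T @ Hf + beta * np.eye(L)
        C[f] = np.linalg.solve(A, Hf.conj().T @ d[f])
    return C
```

With small beta the filters drive the microphone responses toward the free-field target; larger beta trades accuracy for bounded loudspeaker effort.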
H-2 Advanced Multichannel Audio Systems with Better Impression of Presence and Reality—Kimio Hamasaki, Koichiro Hiyama, Toshiyuki Nishiguchi, Kazuho Ono, NHK, Science & Technical Research Laboratories, Tokyo, Japan
Various sound systems have been studied at NHK with the objective of developing the next-generation broadcasting system. This paper introduces the ultimate 22.2 multichannel audio system for ultrahigh-definition video with 4000 scanning lines, and an advanced multichannel sound system with frontal loudspeakers placed in several rows for reproducing a live sound field. The former system has 3 vertical layers of loudspeakers with 2 LFE channels. The latter system consists of frontal loudspeaker ranks and rear loudspeaker arrays for reproducing a natural impression of depth and ambience. This paper describes the principal advantages of the newly proposed multichannel audio system over ordinary multichannel sound systems such as 5.1.
H-3 Visualizing Spatial Sound Imagery of Multichannel Audio—John Usher, Wieslaw Woszczyk, McGill University, Montreal, Quebec, Canada
To describe a multichannel audio experience in terms of its spatial features requires us to consider separately how we hear the direct and the indirect sound. We have developed and tested a Graphical User Interface (GUI) that allows a listener to describe where they hear each of these acoustic parts in an audio scene. The GUI has previously been used as a tool for describing where we hear the direct sound in an audio sound field, and we now extend the experimental paradigm to measure where we hear the indirect sound. We map the spatial extent of the reflected sound and describe a category system for a spatial sound attribute called “definition.” We tested the GUI using 5 loudspeakers arranged according to ITU-R BS.775 to replay “live” multichannel sound recordings of three different musical pieces (two duets and one solo). Graduate Tonmeister students used the GUI to describe these sound scenes, and a variety of statistical analyses are presented that show how data from the GUI can be used to represent perceived spatial sound imagery.
H-4 Wave Field Synthesis in the Real World: Part 2—In the Movie Theater—Thomas Sporer, Beate Klehs, Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
In anechoic rooms the concept of Wave Field Synthesis (WFS) has already been proven to provide superior spatial sound over a large part of the room. In anechoic space, WFS needs a huge number of loudspeakers. Under “normal” listening conditions, simulated and real acoustics interfere with each other, making the generated wave field less exact. This paper describes listening tests conducted to evaluate WFS in a movie theater with about 100 seats. The parameters tested are the number of loudspeakers, the distance between loudspeakers, the position of the simulated source, and the position of listeners relative to the loudspeakers. In an additional test, the audio-visual coherence was investigated.
H-5 Wave Field Synthesis 3-D Simulator Based on Finite-Difference Time-Domain Method—Jose Escolano1, Sergio Bleda1, Basilio Pueo1, José Javier López2
1 University of Alicante, Alicante, Spain
2 Technical University of Valencia, Valencia, Spain
The Finite-Difference Time-Domain (FDTD) method was successfully developed to model electromagnetic systems. The technique has also been used in several other disciplines, such as optics and acoustics. A new approach for Wave Field Synthesis (WFS) simulation using FDTD instead of the classic finite-difference method is presented. The software allows precise evaluation and behavior monitoring of different WFS configurations in the time domain, and thus in a particular frequency band. Moreover, simulations can be analyzed inside a room or in free space.
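The core of an acoustic FDTD simulator is a leapfrog update of pressure and particle velocity on a staggered grid. The Python sketch below is a minimal 2-D illustration of that scheme under assumed grid constants (it is not the paper's 3-D implementation); boundaries are left rigid for simplicity:

```python
import numpy as np

def fdtd_step(p, vx, vy, c=343.0, rho=1.21, dx=0.01, dt=1e-5):
    """Advance pressure p and particle velocities vx, vy by one time step.

    All three arrays share the same (N, N) shape; vx[i, j] sits half a
    cell to the right of p[i, j], vy half a cell below (staggered grid).
    The defaults satisfy the 2-D CFL condition c*dt/dx <= 1/sqrt(2).
    Untouched edge velocities stay zero, i.e., rigid boundaries.
    """
    # update velocities from the pressure gradient
    vx[:, :-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    vy[:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    # update pressure from the velocity divergence
    k = rho * c**2 * dt / dx
    p[:, 1:] -= k * (vx[:, 1:] - vx[:, :-1])
    p[1:, :] -= k * (vy[1:, :] - vy[:-1, :])
    return p, vx, vy
```

Seeding the grid with an impulse and iterating this step yields an expanding circular wavefront, which is how such a simulator can monitor a WFS configuration in the time domain.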
H-6 Reproduction of Reverberation with Spatial Impulse Response Rendering—Ville Pulkki1, Juha Merimaa1,2, Tapio Lokki1
1 Helsinki University of Technology, Espoo, Finland
2 Ruhr-Universität Bochum, Bochum, Germany
A technique for spatial reproduction of room acoustics, Spatial Impulse Response Rendering (SIRR), has recently been proposed. In the method, a multichannel impulse response of a room is measured, and responses for the loudspeakers of an arbitrary multichannel listening setup are computed. When these responses are loaded into a convolving reverberator, they create a perception of space corresponding to the measured room. The method is based on measurement with a sound field microphone or a comparable system, and on analysis of direction of arrival and diffuseness in frequency bands. An omnidirectional response is then positioned in the loudspeaker system according to the analyzed directions and diffuseness. In this paper the SIRR method is reviewed and refined. The reproduction quality of SIRR and some other systems is evaluated with listening tests, and it is found that SIRR yields a natural spatial reproduction of the acoustics of a measured room.
H-7 New and Advanced Features for Audio Rendering in the MPEG-4 Standard—Jürgen Schmidt, Ernst F. Schröder, Thomson Corporate Research, Hannover, Germany
Since the early days of audio stereophony, we have tended to think of audio transmission and audio presentation in terms of loudspeaker feeds or “channels.” This seemed appropriate for as few channels as two and still reasonable for five, but it is rapidly losing its meaning with the advent of technologies such as wave field synthesis. A key part of MPEG-4 is the introduction of object-oriented thinking for the description, generation, transport, and rendering of audio scenes. Binary Format for Scenes (BIFS) is the part of the MPEG-4 standard that enables transmission of scene descriptions together with the audio signals to facilitate the final rendering. The latest version of BIFS (Version 3) has a number of improvements and new concepts, including presentation of sound fields (inclusion of Ambisonics and Wave Field Synthesis), presentation of “shaped sounds,” and the possibility of combining 3-D audio with 2-D video. The concepts and achievements by MPEG with audio BIFS V3 will be explained in detail.
H-8 The Quick Reference Guide to Multichannel Microphone Arrays Design Part II: Using Supercardioid and Hypocardioid Microphones—Michael Williams1, Guillaume Le Du2
1 Sounds of Scotland, Le Perreux sur Marne, France
2 Radio France, Paris, France
This paper is the second part of a paper presented at the 110th AES Convention in Amsterdam. A selection of different multichannel microphone arrays is again presented, this time using supercardioid and hypocardioid microphones. Five-channel array configurations are described with respect to their particular characteristics: microphone directivity, specific segment coverage, segment offset values where necessary, and microphone coordinates and orientations. The arrays have been chosen to assist the sound engineer in the search for the optimum microphone array for a given recording situation.




(C) 2004, Audio Engineering Society, Inc.