AES Vienna 2007
Technical Program
Paper Sessions

Last Updated: 20070320, mei

P13 - Multichannel Sound

Sunday, May 6, 16:30 — 18:00

P13-1 Headphones Technology for Surround Sound Monitoring—A Virtual 5.1 Listening Room
Renato Pellegrini, Clemens Kuhn, sonic emotion ag - Oberglatt (Zurich), Switzerland; Mario Gebhardt, Beyerdynamic GmbH & Co. KG - Heilbronn, Germany
This paper presents a headphone technology for professional surround monitoring with virtual 5.1 reproduction. Using perceptually motivated binaural signal processing and ultrasonic head tracking, the system simulates a loudspeaker setup with correct localization and room impression. As a professional recording and mixing tool it provides the advantages of a portable headphone solution while avoiding the known drawbacks such as inside-head localization, limited room perception, and turning of the sonic image with the listener’s head. The combination of three technologies—binaural reproduction, room simulation, and head tracking—enables the reproduction of a virtual reference listening room for applications in studios, recording trucks, and mobile recording setups.
Convention Paper 7068 (Purchase now)
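The key to keeping the sonic image from turning with the listener's head is to re-render the virtual loudspeakers in head-relative coordinates on every tracker update. A minimal sketch of that compensation step, assuming standard 5.1 speaker azimuths per ITU-R BS.775 and a yaw convention (positive = head turned left) that the paper does not specify:

```python
# Hypothetical sketch: compensate tracked head yaw so the five virtual
# loudspeakers stay anchored to the room rather than to the head.
# Azimuths in degrees, per the ITU-R BS.775 5.1 layout (LFE omitted).
SPEAKERS_51 = {"L": 30.0, "R": -30.0, "C": 0.0, "Ls": 110.0, "Rs": -110.0}

def head_relative_azimuths(head_yaw_deg):
    """Return each speaker's azimuth relative to the listener's head.

    Subtracting the tracked yaw from the room-fixed azimuth is what keeps
    the virtual sources stationary; the result selects which HRIR pair to
    use for each channel on the next processing block.
    """
    out = {}
    for name, az in SPEAKERS_51.items():
        rel = (az - head_yaw_deg + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
        out[name] = rel
    return out

# Listener turns 30 degrees to the left: the front-left speaker now sits
# straight ahead, and the centre channel appears 30 degrees to the right.
rel = head_relative_azimuths(30.0)
```

In a real renderer these head-relative angles would then index into an HRIR database for the binaural convolution; the sketch only shows the coordinate update that the head tracker drives.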

P13-2 Hybrid Sound Field Processing for Wave Field Synthesis System
Hyunjoo Chung, Hwan Shim, Seoul National University - Seoul, Korea; JunSeok Lim, Sejong University - Seoul, Korea; Jae Hyoun Yoo, Electronics and Telecommunications Research Institute (ETRI) - Yusung-gu Daejeon, Korea; Koeng-Mo Sung, Seoul National University - Seoul, Korea
Using the wave field synthesis (WFS) method, the sound of a primary source was reproduced by plane waves. Despite some shortcomings, such as spatial aliasing, these plane waves enlarged the sweet spot of the listening area and decreased the localization error of the sound source. We also propose a grouped reflections algorithm (GRA) for reproducing early reflections; this sequence of early reflections increased the spaciousness of the listening room environment. The method was implemented with linear arrays of 32 loudspeakers constructed in an anechoic room. For backward compatibility with standard five-channel surround titles, a new hybrid sound field processing algorithm combining the WFS and GRA methods was implemented.
Convention Paper 7069 (Purchase now)
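Synthesizing a plane wave with a linear loudspeaker array comes down to delaying each speaker so the individual wavefronts line up along the desired direction. A minimal sketch of that delay computation, assuming a 32-speaker array as in the paper but with a hypothetical 0.15 m spacing (the paper does not state the geometry):

```python
import math

# Hypothetical sketch of the per-speaker delays used to synthesize a plane
# wave with a linear array. Assumed geometry: 32 speakers along the x-axis,
# 0.15 m apart; speed of sound 343 m/s. Amplitude weighting omitted.
C = 343.0       # speed of sound, m/s
N = 32          # number of loudspeakers (as in the paper's arrays)
SPACING = 0.15  # assumed inter-speaker spacing, m

def plane_wave_delays(angle_deg):
    """Delays (seconds) so the array radiates a plane wave at angle_deg
    from the array normal. The speaker furthest "upstream" of the wave
    fires first, so delays are shifted to start at zero."""
    theta = math.radians(angle_deg)
    positions = [n * SPACING for n in range(N)]
    raw = [x * math.sin(theta) / C for x in positions]  # projection onto wave direction
    t0 = min(raw)
    return [t - t0 for t in raw]

delays = plane_wave_delays(30.0)
```

Spatial aliasing, the shortcoming the abstract mentions, appears above the frequency where the spacing exceeds half a wavelength (around 1.1 kHz for the assumed 0.15 m pitch).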

P13-3 Reproduction of Virtual Reality with Multichannel Microphone Techniques
Timo Hiekkanen, Tero Lempiäinen, Martti Mattila, Ville Veijanen, Ville Pulkki, Helsinki University of Technology - Espoo, Finland
The perceptual differences between virtual reality and its reproduction with different simulated multichannel microphone techniques were measured using listening tests. The virtual reality was generated using the image-source method and 16 loudspeakers in a 3-D arrangement in an anechoic chamber. Two spaced and two coincident microphone techniques were tested, namely the Fukada tree, the Decca tree, first-order Ambisonics, and second-order Ambisonics. The spaced techniques utilized the 5.0 setup, and the Ambisonics techniques utilized the quadraphonic setup. The perceptual difference was measured with the ITU impairment scale.
Convention Paper 7070 (Purchase now)
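The image-source method the authors use to generate the virtual room models each reflection as a mirror image of the source across a wall. A minimal sketch of the first-order case for a shoebox room (the function name and shoebox assumption are illustrative; the paper does not describe its specific geometry or reflection order):

```python
# Hypothetical sketch of the image-source method: each of the six walls of
# a shoebox room mirrors the source once, and every image source then
# contributes one first-order early reflection.
def first_order_images(src, room):
    """First-order image sources for a shoebox room.

    src  -- (x, y, z) source position
    room -- (Lx, Ly, Lz) room dimensions, with walls at 0 and L on each axis
    Returns six mirrored source positions, one per wall.
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # reflect across this wall
            images.append(tuple(img))
    return images

# Source at (2, 3, 1.5) m in a 6 x 4 x 3 m room.
imgs = first_order_images((2.0, 3.0, 1.5), (6.0, 4.0, 3.0))
```

Higher reflection orders follow by mirroring the images recursively; each image's distance to the listener gives the delay of its reflection, which is how the method produces the room response fed to the 16 loudspeakers.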