AES San Francisco 2012
Paper Session P14

P14 - Spatial Audio Over Headphones

Sunday, October 28, 9:00 am — 11:30 am (Room 121)

Chair: David McGrath, Dolby Australia - McMahons Point, NSW, Australia

P14-1 Preferred Spatial Post-Processing of Popular Stereophonic Music for Headphone Reproduction
Ella Manor, The University of Sydney - Sydney, NSW, Australia; William L. Martens, University of Sydney - Sydney, NSW, Australia; Densil A. Cabrera, University of Sydney - Sydney, NSW, Australia
The spatial imagery experienced when listening to conventional stereophonic music via headphones differs considerably from that experienced in loudspeaker reproduction. While the difference might be reduced when stereophonic program material is spatially processed to simulate loudspeaker crosstalk for headphone reproduction, previous listening tests have shown that such processing typically produces results that listeners do not prefer over the original (unprocessed) version of a music program. In this study a double-blind test was conducted in which listeners compared five versions of eight programs from a variety of music genres and gave both preference ratings and ensemble stage width (ESW) ratings. Of the four alternative post-processing algorithms, the most preferred outputs came from a near-field crosstalk simulation mimicking the low-frequency interaural level differences typical of close-range sources.
Convention Paper 8779
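The paper itself does not include code; as an illustrative sketch only, the kind of loudspeaker-crosstalk simulation the abstract describes can be approximated by feeding each ear its own channel plus a delayed, low-passed, attenuated copy of the opposite channel. All parameter values and function names below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def crossfeed(left, right, fs, gain_db=-4.5, cutoff_hz=700.0, delay_ms=0.3):
    """Minimal loudspeaker-crosstalk simulation for headphone playback.

    Each ear receives its own channel plus a delayed, low-passed copy of
    the opposite channel, roughly mimicking the acoustic crosstalk of a
    stereo loudspeaker pair. Parameter values are illustrative defaults,
    not taken from the paper.
    """
    delay = int(round(fs * delay_ms / 1000.0))
    gain = 10.0 ** (gain_db / 20.0)

    # First-order IIR low-pass on the crossfeed path (head shadowing).
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)

    def lowpass(x):
        y = np.empty_like(x)
        state = 0.0
        for i, s in enumerate(x):
            state = (1.0 - a) * s + a * state
            y[i] = state
        return y

    def delayed(x):
        # Shift by the interaural delay, keeping the original length.
        return np.concatenate([np.zeros(delay), x])[: len(x)]

    out_l = left + gain * delayed(lowpass(right))
    out_r = right + gain * delayed(lowpass(left))
    return out_l, out_r
```

A near-field variant, as studied in the paper, would increase the level difference between the direct and crossfeed paths at low frequencies to imitate close-range sources.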

P14-2 Interactive 3-D Audio: Enhancing Awareness of Details in Immersive Soundscapes?
Mikkel Schmidt, Technical University of Denmark - Kgs. Lyngby, Denmark; Stephen Schwartz, SoundTales - Helsingør, Denmark; Jan Larsen, Technical University of Denmark - Kgs. Lyngby, Denmark
Spatial audio and the possibility of interacting with the audio environment are thought to increase listeners' attention to details in a soundscape. This work examines whether interactive 3-D audio enhances listeners' ability to recall details in a soundscape. Nine different soundscapes were constructed and presented in either mono, stereo, 3-D, or interactive 3-D, and performance was evaluated by asking factual questions about details in the audio. Results show that spatial cues can increase attention to background sounds while reducing attention to narrated text, indicating that spatial audio can be constructed to guide listeners' attention.
Convention Paper 8780

P14-3 Simulating Autophony with Auralized Oral-Binaural Room Impulse Responses
Manuj Yadav, University of Sydney - Sydney, NSW, Australia; Luis Miranda, University of Sydney - Sydney, NSW, Australia; Densil A. Cabrera, University of Sydney - Sydney, NSW, Australia; William L. Martens, University of Sydney - Sydney, NSW, Australia
This paper presents a method for simulating the sound that one hears from one’s own voice in a room acoustic environment. Impulse responses from the mouth to the two ears of the same head are auralized within a computer-modeled room in ODEON, using higher-order Ambisonics to model the directivity pattern of an anthropomorphic head and torso. These binaural room impulse responses, which can be measured for all possible head movements, are input into a mixed-reality room acoustic simulation system for talking-listeners. With the system, “presence” in a room environment different from the one in which one is physically present is created in real time for voice-related tasks.
Convention Paper 8781
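At its core, the auralization step the abstract describes amounts to convolving the dry voice signal with the mouth-to-ear impulse response pair that matches the current head orientation. The sketch below is an illustrative assumption, not the authors' system: the data layout (a mapping from yaw angle to an IR pair) and the function name are invented for the example.

```python
import numpy as np

def auralize_own_voice(dry_voice, oral_binaural_irs, yaw_deg):
    """Illustrative sketch: render one's own voice through oral-binaural
    room impulse responses (OBRIRs).

    `oral_binaural_irs` maps a head-yaw angle in degrees to a pair of
    mouth-to-ear impulse responses (left, right), e.g. modeled in ODEON
    as the abstract describes. This data layout is an assumption made
    for the example.
    """
    # Pick the IR pair available for the nearest head rotation.
    nearest = min(oral_binaural_irs, key=lambda angle: abs(angle - yaw_deg))
    ir_l, ir_r = oral_binaural_irs[nearest]

    # Convolve the dry voice with each ear's impulse response.
    ear_l = np.convolve(dry_voice, ir_l)
    ear_r = np.convolve(dry_voice, ir_r)
    return ear_l, ear_r
```

A real-time talking-listener system would instead run a low-latency partitioned convolution and crossfade between IR pairs as the tracked head moves.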

P14-4 Head-Tracking Techniques for Virtual Acoustics Applications
Wolfgang Hess, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Synthesis of auditory virtual scenes often requires a head-tracker. Virtual sound fields benefit from continuous adaptation to the listener’s position while presented through headphones or loudspeakers. This task requires position- and time-accurate, continuous, and robust capture of the position of the listener’s outer ears. Current head-tracker technologies solve it with inexpensive and reliable electronic techniques, but environmental conditions must be considered to find the optimal tracking solution for each surrounding and each field of application. A categorization of head-tracking systems is presented: inside-out tracking captures stationary references with sensors inside the scene, whereas outside-in tracking captures the scene from sensors outside it. Marker-based and marker-less approaches are described and evaluated by means of commercially available products, e.g., the Microsoft Kinect, and proprietary systems.
Convention Paper 8782

P14-5 Scalable Binaural Synthesis on Mobile Devices
Christian Sander, University of Applied Sciences Düsseldorf - Düsseldorf, Germany; Robert Schumann Hochschule Düsseldorf - Düsseldorf, Germany; Frank Wefers, RWTH Aachen University - Aachen, Germany; Dieter Leckschat, University of Applied Sciences Düsseldorf - Düsseldorf, Germany
The binaural reproduction of sound sources through headphones in mobile applications is becoming a promising opportunity to create an immersive three-dimensional listening experience without the need for extensive equipment. Many ideas for outstanding applications implementing binaural audio have been proposed, in teleconferencing, multichannel rendering for headphones, gaming, and auditory interfaces. However, this diversity of applications calls for scalability of quality and performance cost, so that hardware resources can be used and shared economically. To this end, scalable real-time binaural synthesis on mobile platforms was developed and implemented in a test application in order to evaluate, both qualitatively and quantitatively, what current mobile devices are capable of in terms of binaural technology. In addition, the audio part of three application scenarios was simulated.
Convention Paper 8783
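One simple way to realize the quality/cost scalability the abstract mentions is to truncate the head-related impulse responses before convolution, trading spatial detail for CPU time on weaker devices. The sketch below is a hedged illustration under that assumption; the data layout, function name, and truncation strategy are invented for the example and are not the paper's implementation.

```python
import numpy as np

def render_scene(sources, quality):
    """Illustrative sketch of scalable binaural synthesis.

    `sources` is a list of (signal, (hrir_left, hrir_right)) pairs;
    `quality` in (0, 1] sets the fraction of HRIR taps kept, so that
    cheaper devices can trade spatial detail for CPU time. This scaling
    strategy is an assumption made for the example.
    """
    out_l = out_r = None
    for sig, (hl, hr) in sources:
        # Keep only a quality-dependent fraction of each HRIR's taps.
        n = max(1, int(len(hl) * quality))
        l = np.convolve(sig, hl[:n])
        r = np.convolve(sig, hr[:n])
        if out_l is None:
            out_l, out_r = l, r
        else:
            # Zero-pad to a common length, then mix the source in.
            m = max(len(out_l), len(l))
            out_l = np.pad(out_l, (0, m - len(out_l))) + np.pad(l, (0, m - len(l)))
            out_r = np.pad(out_r, (0, m - len(out_r))) + np.pad(r, (0, m - len(r)))
    return out_l, out_r
```

A production mobile renderer would additionally use block-based FFT convolution and adapt `quality` at runtime to the device's measured headroom.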


EXHIBITION HOURS: October 27th 10am – 6pm; October 28th 10am – 6pm; October 29th 10am – 4pm
REGISTRATION DESK: October 25th 3pm – 7pm; October 26th 8am – 6pm; October 27th 8am – 6pm; October 28th 8am – 6pm; October 29th 8am – 4pm
TECHNICAL PROGRAM: October 26th 9am – 7pm; October 27th 9am – 7pm; October 28th 9am – 7pm; October 29th 9am – 5pm