120th AES Convention - Paris, France - Dates: Saturday May 20 - Tuesday May 23, 2006 - Porte de Versailles


P22 - Design and Engineering of Auditory Displays

Monday, May 22, 16:00 — 18:20

Chair: Densil Cabrera, University of Sydney - Sydney, New South Wales, Australia

William Martens, McGill University - Montreal, Quebec, Canada

P22-1 Spatial Sound in Auditory Vision Substitution Systems
Aleksander Väljamäe, Mendel Kleiner, Chalmers University of Technology - Göteborg, Sweden
Current auditory vision sensory substitution (AVSS) systems might be improved by the direct mapping of an image into a matrix of concurrently active sound sources in a virtual acoustic space. This mapping might be similar to the existing techniques for tactile substitution of vision, where point arrays are successfully used. This paper gives an overview of the current auditory displays used to sonify 2-D visual information and discusses the feasibility of new perceptually motivated AVSS methods encompassing spatial sound.

Presentation is scheduled to begin at 16:00
Convention Paper 6795 (Purchase now)
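The direct image-to-sound mapping described in the P22-1 abstract can be sketched as follows. This is a minimal illustration only: the azimuth/elevation ranges and the brightness-to-amplitude rule are assumptions, not details from the paper.

```python
import numpy as np

def image_to_source_matrix(image, az_range=(-45.0, 45.0), el_range=(-30.0, 30.0)):
    """Map each pixel of a grayscale image (values in [0, 1]) to a virtual
    sound source: pixel column -> azimuth, pixel row -> elevation,
    pixel brightness -> source amplitude. Ranges are assumed values."""
    rows, cols = image.shape
    azimuths = np.linspace(az_range[0], az_range[1], cols)
    # Top image row maps to the highest elevation.
    elevations = np.linspace(el_range[1], el_range[0], rows)
    sources = []
    for r in range(rows):
        for c in range(cols):
            amp = float(image[r, c])
            if amp > 0.0:  # fully dark pixels produce no source
                sources.append({"az": float(azimuths[c]),
                                "el": float(elevations[r]),
                                "amplitude": amp})
    return sources
```

Each dictionary in the returned list would then drive one source in the virtual acoustic space, all sources sounding concurrently as the abstract describes.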

P22-2 Acoustic Rendering for Color Information
Ludovico Ausiello, Emanuele Cecchetelli, Massimo Ferri, Nicoletta Caramelli, University of Bologna - Bologna, Italy
The Espacio Acustico Virtual (EAV) is a portable device that acoustically represents visual environmental scenes by rendering objects with the sound of virtual rain drops. Here, an improvement of this device is presented, which adds color to the information conveyed. Two different mappings of color into sound were implemented: Georama is a geometric coding based on red-green-blue vectors, while Colorama is an associative coding based on the hue-and-saturation model of color space. An experiment was run with both sighted and blind participants in order to assess which of these codings is more user-friendly. The results showed that participants learned to discriminate colors through sounds better when trained with Georama than with Colorama.

[Associated Poster Presentation in Session P27, Tuesday, May 23, at 11:00]

Presentation is scheduled to begin at 16:20
Convention Paper 6796 (Purchase now)
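The two color codings compared in P22-2 can be illustrated schematically. Only the RGB-versus-hue/saturation distinction comes from the abstract; the carrier frequencies, the one-octave hue-to-pitch mapping, and the level assignments below are hypothetical choices for the sketch.

```python
import colorsys

def geometric_coding(r, g, b):
    """Georama-style sketch (assumed form): each RGB channel drives the
    level of one fixed carrier tone, so a color is heard as a chord."""
    carriers_hz = (440.0, 554.37, 659.25)  # assumed carriers, not from the paper
    return [{"freq_hz": f, "level": ch} for f, ch in zip(carriers_hz, (r, g, b))]

def associative_coding(r, g, b):
    """Colorama-style sketch (assumed form): hue selects pitch along one
    octave, saturation sets level, value sets overall loudness."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    freq_hz = 220.0 * 2.0 ** h  # hue in [0, 1) spans one octave from 220 Hz
    return {"freq_hz": freq_hz, "level": s, "loudness": v}
```

For pure red, the geometric coding excites only the first carrier, while the associative coding yields the base pitch at full saturation, which makes the structural difference between the two schemes audible at a glance.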

P22-3 Auditory Display of Audio
Densil Cabrera, Sam Ferguson, University of Sydney - Sydney, New South Wales, Australia
In this paper we consider applications of auditory display for representing audio systems and audio signal characteristics. Conventional analytic representations of system characteristics, such as impulse response or nonlinear distortion, rely on numeric and graphic communication. Alternatively, simply listening to the system under test can reveal important aspects of its performance. Given that auditioning systems is so effective, it seems useful to develop higher-level auditory representations (auditory displays) of system performance parameters that exploit these listening abilities. For this purpose, we consider ways in which audio signals can be further transformed for auditory display, beyond the simple act of playing the sound.

[Associated Poster Presentation in Session P27, Tuesday, May 23, at 11:00]

Presentation is scheduled to begin at 16:40
Convention Paper 6797 (Purchase now)
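One transformation of the kind P22-3 discusses, rendering a system's measured magnitude response audible as a stepped tone sweep whose level tracks the measured gain, might look like this sketch. The sample rate and step duration are assumptions, and this is not the paper's method, just an example of the general idea.

```python
import numpy as np

def sonify_magnitude_response(mags, freqs_hz, sr=8000, seconds_per_band=0.05):
    """Sketch (assumed technique): play one short tone per measured frequency
    band, scaled by the band's gain, so dips and peaks in the response are
    heard directly as level changes along the sweep."""
    n = int(sr * seconds_per_band)   # samples per band
    t = np.arange(n) / sr
    segments = [m * np.sin(2 * np.pi * f * t) for f, m in zip(freqs_hz, mags)]
    return np.concatenate(segments)
```

A flat system would then sound as an even sweep, while a notch in the response becomes an audible dropout at the corresponding pitch.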

P22-4 Nonvocal Auditory Signals in the Operating Room for Each Phase of the Anesthesia Procedure
Anne Guillaume, Léonore Bourgeon, Elisa Jacob, Marie Rivenez, Claude Valot, IMASSA - Brétigny sur Orge, France; Jean-Bernard Cazalà, Hôpital Necker - Paris, France
Auditory warning signals are considered by the anesthetist team to be a major source of annoyance and confusion in the operating room. An ergonomic approach was carried out in order to propose a functional classification of the auditory alarms and to allocate a correct level of urgency to each. This allowed the team to analyze the pertinence of the auditory warning signals emitted during each phase of the anesthesia procedure. The results showed that the design of auditory warning signals could be improved by taking the activity of the anesthetist team into account. They also showed significantly higher frequencies of warning signals during the induction and emergence phases. However, the alarms were often ignored during these two phases because they occurred as a result of deliberate anesthetist actions; most of them were therefore considered nuisance alarms.

[Associated Poster Presentation in Session P27, Tuesday, May 23, at 11:00]
Presentation is scheduled to begin at 17:00
Convention Paper 6798 (Purchase now)

P22-5 Frequency Bandwidth and Multitalker Environments
Simon Carlile, David Schonstein, University of Sydney - Sydney, New South Wales, Australia
Understanding a talker of interest from a complex background is a common and difficult listening task not just restricted to cocktail parties. Recent work demonstrates that high frequencies in speech are important for accurately localizing the talker and that perceived differences in the locations of talkers are important in solving the cocktail party problem. This paper describes experiments demonstrating that high frequencies contribute to the spatial release from masking by other talkers. In addition, low frequency energy at the fundamental frequency of the talker, over and above the perception of the fundamental frequency, also plays a role in spatial release from masking.

Presentation is scheduled to begin at 17:20
Convention Paper 6799 (Purchase now)

P22-6 Usability of 3-D Sound for Navigation in a Constrained Virtual Environment
Antoine Gonot, France Telecom R&D - Lannion, France, CNAM, Paris, France; Noël Château, Marc Emerit, France Telecom R&D - Lannion, France
This paper presents a global evaluation of spatial auditory displays in a constrained virtual environment. Forty subjects had to find nine sound sources in a virtual town, navigating by means of spatialized auditory cues delivered under four conditions: binaural versus stereophonic rendering (over headphones), combined with contextualized versus decontextualized presentation of the information. Behavioral data, self-assessed cognitive load, and subjective impressions collected via a questionnaire were recorded. The analysis shows that the binaural, contextualized presentation of auditory cues leads to the best results in terms of usability, cognitive load, and subjective evaluation. However, these advantages only become apparent after a certain period of acquisition.

[Associated Poster Presentation in Session P27, Tuesday, May 23, at 11:00]

Presentation is scheduled to begin at 17:40
Convention Paper 6800 (Purchase now)

P22-7 Psychoacoustic Evaluation of a New Method for Simulating Near-Field Virtual Auditory Space
Alan Kan, Craig Jin, André van Schaik, University of Sydney - Sydney, New South Wales, Australia
A new method for generating near-field virtual auditory space (VAS) is presented. This method synthesizes near-field head-related transfer functions (HRTFs) based on a distance variation function (DVF). Using a sound localization experiment, the fidelity of the near-field VAS generated with this technique is compared to that obtained using near-field HRTFs synthesized by a multipole expansion of a set of HRTFs interpolated with a spherical thin-plate spline. Individualized HRTFs for varying distances in the near field were synthesized from the subjects' HRTFs measured at a radius of 1 m for a limited number of locations around the listener's head. Both methods yielded similar localization performance, showing no major directional localization errors and a reasonable correlation between perceived and target distances for sounds up to 50 cm from the center of the subject's head. Subjects tended to overestimate the target distance for both methods.

[Associated Poster Presentation in Session P27, Tuesday, May 23, at 11:00]

Presentation is scheduled to begin at 18:00
Convention Paper 6801 (Purchase now)
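A distance variation function of the general kind described in P22-7 might be sketched as follows. The inverse-path-length gain rule and the head-radius value are assumptions for illustration; the paper's actual DVF is not given in the abstract.

```python
import numpy as np

def apply_distance_variation(hrtf_1m, distance_m, azimuth_deg, head_radius_m=0.0875):
    """Sketch (assumed DVF form): derive a near-field HRTF from a 1 m
    measurement by rescaling each ear's gain with the ratio of its path
    length to the 1 m source over its path length to the near source."""
    az = np.deg2rad(azimuth_deg)
    # Approximate ear positions on the interaural axis (head radius assumed).
    ears = {"left": np.array([-head_radius_m, 0.0]),
            "right": np.array([head_radius_m, 0.0])}
    direction = np.array([np.sin(az), np.cos(az)])
    src = distance_m * direction       # near-field source position
    ref = 1.0 * direction              # position of the 1 m measurement
    out = {}
    for name, ear in ears.items():
        gain = np.linalg.norm(ref - ear) / np.linalg.norm(src - ear)
        out[name] = gain * hrtf_1m[name]  # nearer ear receives the larger boost
    return out
```

For a lateral source moved in to 25 cm, this rule boosts the ipsilateral ear much more than the contralateral one, reproducing the exaggerated interaural level difference that is characteristic of near-field listening.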

  (C) 2006, Audio Engineering Society, Inc.