AES 117th Convention Program

Sunday, October 31, 1:30 pm – 3:30 pm

Chair: Durand Begault,
NASA Ames Research Center, Mountain View, CA, USA

1:30 pm
Telescopic Spatial Radio
Norman Jouppi, Subu Iyer, Hewlett-Packard, Palo Alto, CA, USA
We have developed a system we call Telescopic Spatial Radio (TSR). This system transforms monaural transmissions from geographically distributed speakers into a spatial audio presentation using binaural techniques, which preserve the actual physical angles between participants. TSR instantly augments the user’s situational awareness with the headings of the speaking users. The system leverages orientation measuring, location tracking, and signal processing capabilities that are rapidly decreasing in cost. TSR has many potential applications ranging from emergency and aviation communication to a richer consumer experience. We have developed a prototype system using laptop computers, GPS, and electronic compasses. The system allows users to select HRTFs from a library and operates over a computer network.
Convention Paper 6314
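The geometric core of what the abstract describes — combining GPS positions with an electronic-compass heading to obtain the angle at which a remote speaker should be rendered — can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the function names and the great-circle bearing formula are ours.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def relative_azimuth(listener_latlon, listener_heading_deg, speaker_latlon):
    """Angle of the remote speaker relative to the listener's facing direction,
    i.e. the azimuth a binaural renderer would use to select an HRTF
    (0 = straight ahead, 90 = to the listener's right)."""
    b = bearing_deg(*listener_latlon, *speaker_latlon)
    return (b - listener_heading_deg) % 360.0
```

Subtracting the compass heading is what makes the presentation head-referenced, so the rendered direction stays aligned with the actual physical bearing between participants as the listener turns.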

2:00 pm
Dynamic Cross-Talk Cancellation for Binaural Synthesis in Virtual Reality Environments
Tobias Lentz, Gottfried Behler, Aachen University, Aachen, Germany
To create a virtual reality environment with true immersion, a precise spatial audio reproduction system is required. Since the placement of the large loudspeaker arrays needed for wave field synthesis may be impossible in some environments, alternative solutions must be found. One application of this kind is a multi-screen VR system in which the stereoscopic video images envelop the user. In such a case the presented binaural approach has many advantages. This paper describes virtual sound source imaging by binaural synthesis and its reproduction over loudspeakers with a dynamic (tracked) cross-talk cancellation system that needs only three to four loudspeakers to cover all listening positions.
Convention Paper 6315
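The static core of cross-talk cancellation — inverting the 2×2 acoustic transfer matrix between two loudspeakers and the two ears, per frequency bin — can be sketched as below. This is a generic, regularized matrix inversion for illustration only; the paper's contribution is the dynamic (head-tracked) version with three to four loudspeakers, whose filter-update scheme is not reproduced here.

```python
import numpy as np

def xtc_filters(H_LL, H_LR, H_RL, H_RR, reg=1e-3):
    """Per-frequency-bin cross-talk cancellation filters.

    H_xy[k] is the complex transfer function from loudspeaker x to ear y
    at bin k. Returns C with C[k] a 2x2 matrix mapping the desired binaural
    ear signals to loudspeaker signals, via the Tikhonov-regularized inverse
    C = H^H (H H^H + reg*I)^-1 (regularization limits gain where H is
    near-singular)."""
    nbins = len(H_LL)
    C = np.zeros((nbins, 2, 2), dtype=complex)
    for k in range(nbins):
        H = np.array([[H_LL[k], H_RL[k]],   # row: left ear
                      [H_LR[k], H_RR[k]]])  # row: right ear
        C[k] = H.conj().T @ np.linalg.inv(H @ H.conj().T + reg * np.eye(2))
    return C
```

In a tracked system these filters would be recomputed (or interpolated from a precomputed set) as the listener's head position changes.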

2:30 pm
Dancing Music: Integrated MIDI-Driven Synthesis and Spatialization for Virtual Reality
Masahiro Sasaki, Michael Cohen, University of Aizu, Aizu-Wakamatsu, Fukushima-ken, Japan
We describe a unique MIDI-based spatial sound system featuring a network-driven bank of four RSS-10s (Roland Sound Space Processors) driving an eight-transducer circumferential speaker array in a “3-D Theater,” enabling a three-dimensional dynamic musical space. Sound sources can be choreographed by adding dynamic positional gestures to standard MIDI files. Our sequencing system interprets such files, partitioning their data into two streams: MIDI tonal events, sent to synthesizers, and positional data, sent simultaneously to sound spatializers, which are clients in a multimodal, heterogeneous, collaborative virtual environment. This research achieves programmed and interactive musical control of sound spatialization, synchronizable with stereographic 3-D content, spatializing music to give it real presence for an audience.
Convention Paper 6316
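The stream-partitioning step the abstract describes can be sketched as a simple demultiplexer. How the positional gestures are actually embedded in the standard MIDI file is not specified in the abstract; routing them on one reserved channel is purely an illustrative assumption, as are the function and field names.

```python
def partition_events(events, position_channel=15):
    """Split a time-ordered MIDI event stream into tonal events (for the
    synthesizers) and positional events (for the sound spatializers).

    ASSUMPTION for illustration: positional gestures ride on one reserved
    channel; each event is a dict with at least 'time' and 'channel' keys.
    The paper's actual encoding of position data is not given here."""
    tonal, positional = [], []
    for ev in events:
        (positional if ev["channel"] == position_channel else tonal).append(ev)
    return tonal, positional
```

Because both streams retain their original timestamps, the synthesizers and spatializers can be driven simultaneously and stay synchronized.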

3:00 pm
Integration of Measurements of Interaural Cross-Correlation Coefficient and Interaural Time Difference within a Single Model of Perceived Source Width
Russell Mason, Tim Brookes, Francis Rumsey, University of Surrey, Guildford, Surrey, UK
A measurement model based on the interaural cross-correlation coefficient (IACC) that attempts to predict the perceived source width of a range of auditory stimuli is currently under development. It is necessary to combine the predictions of this model with measurements of interaural time difference (ITD) to allow the model to provide its output on a meaningful scale and to allow integration of results across frequency. A detailed subjective experiment was undertaken using narrow-band stimuli with a number of center frequencies, IACCs and ITDs. Subjects were asked to indicate the perceived position of the left and right boundaries of a number of these stimuli by altering the ITD of a pair of white noise comparison stimuli. It is shown that an existing IACC-based model provides a poor prediction of the subjective results but that modifications to the model significantly increase its accuracy.
Convention Paper 6317
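The two measurements the model combines can be sketched in their textbook form: IACC as the peak of the normalized interaural cross-correlation within a short lag window, and ITD as the lag at which that peak occurs. This is a minimal single-band sketch for illustration; the paper's model operates across frequency bands and applies further modifications not shown here.

```python
import numpy as np

def iacc_and_itd(left, right, fs, max_lag_ms=1.0):
    """Estimate IACC and ITD from a binaural signal pair.

    IACC: magnitude of the peak normalized cross-correlation between the
    ear signals within +/- max_lag_ms. ITD: the lag (in seconds) at which
    that peak occurs."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best_val, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        # correlate left[n] with right[n + lag] over the overlapping samples
        if lag >= 0:
            v = np.sum(left[:len(left) - lag or None] * right[lag:]) / norm
        else:
            v = np.sum(left[-lag:] * right[:len(right) + lag]) / norm
        if abs(v) > abs(best_val):
            best_val, best_lag = v, lag
    return abs(best_val), best_lag / fs
```

A perfectly coherent pair with a pure delay yields IACC = 1 at a lag equal to that delay; decorrelation between the ear signals lowers the peak, which is what IACC-based width models exploit.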


(C) 2004, Audio Engineering Society, Inc.