AES Barcelona 2005
Paper Session C - Spatial Perception and Processing, Part 2


Saturday, May 28, 14:00 — 17:00

Chair: Durand Begault, NASA Ames Research Center - Mountain View, CA, USA

C-1 Using Perceptive Subbands Analysis to Perform Audio Scenes Cartography
Laurent Millot, Gérard Pelé, Mohammed Elliq, ENS Louis Lumière - Noisy-le-Grand, France
A method for audio scene cartography of real or simulated stereo recordings is presented. The analysis proceeds in three successive steps: a perceptive 10-subband analysis; calculation of temporal laws for the relative delays and gains between the two channels of each subband, using a short-time constant-scene assumption and inter-channel correlation, which makes it possible to follow a moving source; and calculation of global and per-subband histograms whose peaks give the incidence information for the fixed sources. Audio scenes composed of two to four fixed sources, or of one fixed and one moving source, have already been tested successfully. Further extensions and applications are discussed. Audio illustrations of audio scenes and subband analyses, and a demonstration of real-time stereo recording simulation, will be given.
Convention Paper 6340
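
The abstract publishes no code, but its core step, estimating a relative delay and gain between the two channels of each subband via inter-channel correlation, can be sketched. The band edges, filter order, and function names below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the per-subband delay/gain analysis described above.
import numpy as np
from scipy.signal import butter, sosfilt

def make_bands(n=10, f_lo=50.0, f_hi=16000.0):
    """n log-spaced bands; stands in for the (unspecified) perceptive scale."""
    edges = np.geomspace(f_lo, f_hi, n + 1)
    return list(zip(edges[:-1], edges[1:]))

def subband_delay_gain(left, right, fs, n_bands=10):
    """Per subband: inter-channel delay (samples) and gain ratio (L/R)."""
    results = []
    for lo, hi in make_bands(n_bands, f_hi=min(16000.0, 0.45 * fs)):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        l, r = sosfilt(sos, left), sosfilt(sos, right)
        corr = np.correlate(l, r, mode="full")       # inter-channel correlation
        delay = int(np.argmax(corr)) - (len(r) - 1)  # peak lag = relative delay
        gain = np.sqrt(np.sum(l**2) / (np.sum(r**2) + 1e-12))
        results.append((delay, gain))
    return results
```

Histogramming such per-band delay estimates over successive short frames would then yield peaks at the fixed sources' incidences, as the abstract describes.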

C-2 Descriptor-Based Spatialization
Jérôme Monceaux, Arkamys - Paris, France; François Pachet, Sony CSL - Paris, France; Frédéric Amadu, Arkamys - Paris, France; Pierre Roy, Aymeric Zils, Sony CSL - Paris, France
The translation of monophonic soundtracks into new audio formats is in growing demand, particularly from DVD producers. However, operations such as upmixing a monophonic track to a multichannel format are time-consuming for the sound engineer, who has to choose, adapt, and tune different spatialization tools. To simplify the upmix, we introduce a new spatialization approach based on perceptive descriptors: audio source characteristics are detected automatically from perceptive features and then used to control spatialization tools. This paper presents an experiment conducted by Arkamys and Sony CSL Paris to evaluate the value of this technique in addressing the needs of sound engineers in the field of spatialization. EDS, a generic audio information extractor, was used to control the Arkamys spatializer. We describe the aim and interest of this descriptor-based approach and discuss its performance and limitations.
Convention Paper 6341
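
As an illustration of descriptor-driven spatialization, the sketch below maps one simple perceptive descriptor (spectral centroid) to a constant-power pan position. The EDS descriptors and the Arkamys spatializer are proprietary; every name and parameter here is an assumption for illustration only.

```python
# Hedged sketch: one descriptor (spectral centroid) controlling one
# spatialization parameter (pan position), per mono frame.
import numpy as np

def spectral_centroid(frame, fs):
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

def upmix_frame(frame, fs, centroid_range=(200.0, 4000.0)):
    """Mono frame to stereo: brighter frames are panned further right."""
    lo, hi = centroid_range                          # assumed mapping range
    pan = np.clip((spectral_centroid(frame, fs) - lo) / (hi - lo), 0.0, 1.0)
    theta = pan * np.pi / 2                          # constant-power pan law
    return np.cos(theta) * frame, np.sin(theta) * frame
```

In a full system, descriptors of this kind would be extracted per source or per frame and mapped to whatever controls the spatialization engine exposes.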

C-3 An Optimized Method for Capturing Multidimensional Acoustic Fingerprints
Ralph Kessler, Pinguin Research and Development - Hamburg, Germany
An optimized method for the creation and modeling of authentic "acoustic fingerprints" of an arbitrary acoustical space is presented. The methodology is intended primarily for experienced recording engineers who possess the necessary elements (a loudspeaker, typical sound recording equipment, and knowledge of microphone placement) for the room sampling method discussed. Listening tests performed by experienced Tonmeisters yielded results that support extending a well-known acoustical method into a new generation of simulation tools for music and film postproduction studios. New software and a "spider microphone" were developed to further facilitate the process. Alongside practical hints for room sampling, the presentation will include a "virtual acoustic tour" through famous German spaces in multichannel format, as well as a preview of the coming generation of devices for complex sound field simulation.
Convention Paper 6342
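
The "well-known acoustical method" being extended is presumably room impulse response measurement followed by convolution; a minimal sketch under that assumption deconvolves a recorded logarithmic sine sweep into an impulse response and applies it as a convolution reverb. Sweep parameters and function names are illustrative, not the paper's procedure.

```python
# Hedged sketch: swept-sine impulse response capture and convolution reverb.
import numpy as np
from scipy.signal import chirp, fftconvolve

def inverse_sweep(fs, duration=10.0, f0=20.0, f1=20000.0):
    t = np.arange(int(fs * duration)) / fs
    sweep = chirp(t, f0=f0, t1=duration, f1=f1, method="logarithmic")
    # Time-reverse the sweep and attenuate its low-frequency tail so that
    # convolving it with the recording collapses the sweep into an impulse.
    envelope = (f1 / f0) ** (-t / duration)
    return sweep[::-1] * envelope

def measure_ir(recording, fs, sweep_duration=10.0):
    """Recording is assumed to start at the sweep's emission."""
    inv = inverse_sweep(fs, duration=sweep_duration)
    ir = fftconvolve(recording, inv, mode="full")
    return ir[len(inv) - 1:]          # causal impulse response starts here

def convolve_reverb(dry, ir):
    wet = fftconvolve(dry, ir, mode="full")[:len(dry)]
    return wet / (np.max(np.abs(wet)) + 1e-12)
```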

C-4 3DApe: A Real-Time 3-D Audio Playback Engine
Teewoon Tan, Craig Jin, Alan Kan, Dennis Lin, André van Schaik, The University of Sydney - Sydney, Australia; Keir Smith, Matthew McGinity, University of New South Wales - Sydney, Australia
A new 3-D Audio Playback Engine (3DApe) and an associated 3-D Audio Soundtrack Producer (3DASP) were developed for Conversations, an interactive, panoramic 3-D cinematic artwork exhibited at the Powerhouse Museum in Sydney, Australia (December 2004). 3DASP is designed to produce spatial-audio soundtracks with no restriction on the number of sound sources that can be rendered simultaneously with real-time head tracking in virtual auditory space. 3DApe is an auditory user interface (AUI) that can simultaneously play back prerendered spatial-audio soundtracks created by 3DASP, spatially render up to four instantaneous and simultaneous sound sources on command, and provide 3-D audio communications using voice over IP. 3DApe provides complete control of its functionality via command messages passed over socket communications. In the Conversations exhibit, 3DApe was controlled by the Virtools graphical software engine.
Convention Paper 6343
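
The abstract specifies only that 3DApe is driven by command messages over sockets. A hypothetical client might look like the following; the port, message framing, command name, and parameters are all invented for illustration and are not 3DApe's actual protocol.

```python
# Hedged sketch of a socket-based command client for a 3DApe-style engine.
import json
import socket

class AudioEngineClient:
    def __init__(self, host="localhost", port=9000):   # assumed address
        self.sock = socket.create_connection((host, port))

    def send(self, command, **params):
        """Send one newline-delimited JSON command message (assumed framing)."""
        msg = json.dumps({"command": command, **params}) + "\n"
        self.sock.sendall(msg.encode("utf-8"))

# e.g., asking the engine to render one instantaneous source on command:
# client = AudioEngineClient()
# client.send("play_source", file="cue.wav", azimuth=45.0, elevation=0.0)
```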

C-5 A Simple Methodology of Calibration for Sensor Arrays for Acoustical Radar System
Alberto Izquierdo-Fuente, Juan-Jose Villacorta-Calvo, Lara Val-Puente, Alberto Martinez-Arribas, Maria-Isabel Jiménez-Gomez, University of Valladolid - Valladolid, Spain; Mariano Raboso-Mateos, UPSA - Salamanca, Spain
A simple "ad hoc" method is presented to allow calibration of arrays of sensors, as much in transmission as in reception, for narrow-band acoustical radar systems, using beamforming techniques where the phase and gain characteristics of each one of the channels is not known. Two methodologies of calibration are proposed testing on a real system, analyzing the degradation introduced on the radiation patterns based on the type of calibration, and appointment angle values.
Convention Paper 6344
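
The abstract does not detail its two methodologies. One generic way to recover unknown per-channel complex gains from a single narrow-band reference source at a known angle is the eigenvector method sketched below; it is offered only to illustrate the calibration problem, not as the authors' procedure, and all names and the signal model are assumptions.

```python
# Hedged sketch: estimating per-channel gain/phase from a reference source.
import numpy as np

def estimate_channel_weights(snapshots, steering):
    """snapshots: (channels, samples) complex baseband data from the reference
    source; steering: ideal (channels,) steering vector for its known angle."""
    # The dominant eigenvector of the covariance approximates the steering
    # vector multiplied elementwise by the unknown channel gains.
    cov = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, vecs = np.linalg.eigh(cov)
    measured = vecs[:, -1]
    gains = measured / steering
    return gains / gains[0]             # normalize: reference channel = 1

def calibrate(snapshots, gains):
    """Equalize the channels before beamforming."""
    return snapshots / gains[:, None]
```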

C-6 Audio-Based Self-Localization for Ubiquitous Sensor Networks
Ben C. Dalton, V. Michael Bove Jr., Massachusetts Institute of Technology - Cambridge, MA, USA
An active audio self-localization algorithm is described that is effective even for distributed sensor networks with only coarse temporal synchronization. A practical implementation of a simple method for estimating ranges from recordings of reference sounds is used: pseudo-noise "chirps" are emitted and recorded at each of the nodes, and pair-wise distances are calculated by comparing the differences between the audio delays of the peaks measured in each recording. By removing the dependence on fine-grained temporal synchronization, it is hoped that this technique can be used concurrently across a wide range of devices to better leverage the audio sensing resources that already surround us. An implementation of this method on the Smart Architectural Surfaces development platform is described and assessed. The viability of the method is further demonstrated on a mixed-device ad hoc sensor network using existing off-the-shelf technology.
Convention Paper 6345
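
The ranging step described, differencing the arrival times of chirp peaks across two recordings so that unknown clock offsets and emission times cancel, can be written down directly. The matched-filter detector and speed-of-sound constant below are standard; the function names and the assumption that each node's speaker and microphone are co-located are illustrative.

```python
# Hedged sketch of pair-wise ranging from mutually recorded chirps.
import numpy as np

def arrival_time(recording, chirp, fs):
    """Matched filter: the cross-correlation peak marks the chirp's arrival."""
    corr = np.correlate(recording, chirp, mode="valid")
    return np.argmax(np.abs(corr)) / fs

def pairwise_distance(rec_i, rec_j, chirp_i, chirp_j, fs, c=343.0):
    """Distance between nodes i and j. Each node hears its own chirp
    essentially immediately and the other node's chirp after d/c, so the
    per-recording differences cancel both clock offsets and emission times."""
    t_ii = arrival_time(rec_i, chirp_i, fs)   # node i hears its own chirp
    t_ij = arrival_time(rec_i, chirp_j, fs)   # node i hears node j's chirp
    t_jj = arrival_time(rec_j, chirp_j, fs)
    t_ji = arrival_time(rec_j, chirp_i, fs)
    return 0.5 * c * ((t_ij - t_ii) + (t_ji - t_jj))
```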

©2005 Audio Engineering Society, Inc.