
AES San Francisco 2008
Paper Session P18

Saturday, October 4, 2:30 pm — 4:00 pm

P18 - Innovative Audio Applications

Chair: Cynthia Bruyns-Maxwell, University of California Berkeley - Berkeley, CA, USA

P18-1 An Audio Reproduction Grand Challenge: Design a System to Sonic Boom an Entire House
Victor W. Sparrow, Steven L. Garrett, Pennsylvania State University - University Park, PA, USA
This paper describes an ongoing research study to design a simulation device that can accurately reproduce sonic booms over the outside surface of an entire house. Sonic booms and previous attempts to reproduce them are reviewed. The authors present calculations suggesting that it will be very difficult to produce the required pressure amplitudes using conventional sound-reinforcement electroacoustic technologies. An additional purpose is to make AES members aware of this research and to solicit feedback from attendees prior to a January 2009 down-selection activity for the design of an outdoor sonic boom simulation system.
Convention Paper 7607 (Purchase now)
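The pressure amplitudes in question can be put in perspective with the standard peak-SPL formula. The following Python sketch uses a hypothetical 100 Pa peak overpressure, a round number in the range typically cited for sonic booms, not a figure from the paper:

```python
import math

def peak_spl(p_peak_pa, p_ref=20e-6):
    """Peak sound pressure level in dB re 20 uPa for a given peak overpressure in Pa."""
    return 20.0 * math.log10(p_peak_pa / p_ref)

# Hypothetical example: a 100 Pa (~2 psf) N-wave peak overpressure
# corresponds to roughly 134 dB peak SPL over the whole house surface.
print(round(peak_spl(100.0), 1))  # -> 134.0
```

Levels of this order over an area the size of a house hint at why conventional sound-reinforcement loudspeakers fall short.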

P18-2 A Platform for Audiovisual Telepresence Using Model- and Data-Based Wave-Field Synthesis
Gregor Heinrich, Fraunhofer Institut für Graphische Datenverarbeitung (IGD) - Darmstadt, Germany, and vsonix GmbH, Darmstadt, Germany; Christoph Jung, Volker Hahn, Michael Leitner, Fraunhofer Institut für Graphische Datenverarbeitung (IGD) - Darmstadt, Germany
We present a platform for real-time transmission of immersive audiovisual impressions using model- and data-based audio wave-field analysis/synthesis and panoramic video capturing/projection. The audio subsystem, the focus of this paper, is based on circular cardioid microphone arrays and weakly directional loudspeaker arrays. We report on both linear and circular setups that feed different wave-field synthesis systems. We report perceptual results for the model-based wave-field synthesis prototypes with beamforming and supercardioid input, and present findings for the data-based approach derived from experimental simulations. This data-based wave-field analysis/synthesis (WFAS) approach combines cylindrical-harmonic decomposition of the cardioid array signals with angular windowing to enforce causal propagation of the synthesized field. Specifically, our contributions include (1) the creation of a high-resolution reproduction environment that is omnidirectional in both the auditory and visual modalities, and (2) a study of data-based WFAS for real-time holophonic reproduction with realistic microphone directivities.
Convention Paper 7608 (Purchase now)
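For a single frequency and equally spaced microphones on a circle, the cylindrical-harmonic decomposition mentioned in the abstract reduces to a discrete Fourier series over the array angle. The sketch below illustrates only that step; the function name and array size are invented for illustration and this is not the authors' implementation:

```python
import numpy as np

def circular_harmonics(p, max_order):
    """
    Angular Fourier (circular-harmonic) coefficients of pressure samples
    from N microphones equally spaced on a circle. `p` has shape (N,).
    Returns a_m for m = -max_order .. max_order.
    """
    n = len(p)
    phi = 2 * np.pi * np.arange(n) / n          # microphone angles
    orders = np.arange(-max_order, max_order + 1)
    # a_m = (1/N) * sum_n p(phi_n) * exp(-i * m * phi_n)
    return (p[None, :] * np.exp(-1j * orders[:, None] * phi[None, :])).mean(axis=1)

# Example: a field p(phi) = cos(2*phi) puts energy only in orders m = +/-2,
# each with coefficient 0.5.
n_mics = 16
phi = 2 * np.pi * np.arange(n_mics) / n_mics
coeffs = circular_harmonics(np.cos(2 * phi), max_order=3)
```

In the paper's setting these coefficients would additionally be shaped by the cardioid directivity of the capsules and by the angular windowing used to enforce causal propagation.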

P18-3 SMART-I2: “Spatial Multi-User Audio-Visual Real-Time Interactive Interface”
Marc Rébillat, University of Paris Sud - Paris, France; Etienne Corteel, sonic emotion ag - Oberglatt, Switzerland; Brian F. Katz, University of Paris Sud - Paris, France
The SMART-I2 aims to create a precise and coherent virtual environment by providing users with accurate audio and visual localization cues. For audio rendering, Wave Field Synthesis, and for visual rendering, Tracked Stereoscopy, are each known to permit high-quality spatial immersion within an extended space. The proposed system combines these two rendering approaches through the use of a large multi-actuator panel serving both as a loudspeaker array and as a projection screen, considerably reducing audio-visual incoherence. The system performance has been confirmed by an objective validation of the audio interface and a perceptual evaluation of the audio-visual rendering.
Convention Paper 7609 (Purchase now)