AES New York 2011
Paper Session P7

P7 - Sound Field Analysis and Reproduction—Part 1


Thursday, October 20, 4:30 pm — 6:30 pm (Room: 1E09)

Chair:
Sascha Spors, Deutsche Telekom Laboratories, Technische Universität Berlin - Berlin, Germany

P7-1 Two Physical Models for Spatially Extended Virtual Sound Sources
Jens Ahrens, Sascha Spors, Deutsche Telekom Laboratories, Technische Universität Berlin - Berlin, Germany
We present physical models for the sound field radiated by plates of finite size and by spheres vibrating in higher modes. The intention is to obtain a model that allows the perceived size of a virtual sound source to be controlled in model-based sound field synthesis. Analytical expressions for the radiated sound fields are derived, and simulations of the latter are presented. An analysis of interaural coherence at a virtual listener, which has been shown to serve as an indicator of perceived spatial extent, provides an initial proof of concept.
Convention Paper 8483
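The interaural coherence measure referenced in the abstract is commonly estimated as the peak of the normalized cross-correlation between the two ear signals over the range of natural interaural delays. A minimal sketch of such an estimate follows; the function name, the NumPy-based implementation, and the 1 ms lag range are illustrative assumptions, not the authors' code:

    import numpy as np

    def iacc(left, right, fs, max_lag_ms=1.0):
        # Interaural cross-correlation coefficient: the peak of the
        # normalized cross-correlation of the two ear signals over
        # lags of +/- max_lag_ms (1 ms covers natural interaural delays).
        left = left - np.mean(left)
        right = right - np.mean(right)
        max_lag = int(round(fs * max_lag_ms / 1000.0))
        r = np.correlate(left, right, mode="full")   # all lags
        mid = len(r) // 2                            # zero-lag index
        norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
        return np.max(np.abs(r[mid - max_lag:mid + max_lag + 1])) / norm

Values near 1 indicate a compact source image; lower values are associated with greater perceived spatial extent.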

P7-2 Auditory Depth Control: A New Approach Utilizing a Plane Wave Loudspeaker Radiating from above a Listener
Sungyoung Kim, Hiraku Okumura, Hideki Sakanashi, Takurou Sone, Yamaha Corporation - Hamamatsu, Japan
One of the distinct features of a 3-D image is that the depth perceived by the viewer is controlled so that objects appear to project toward the viewer. However, it has been difficult to move auditory imagery close to listeners using conventional loudspeakers and panning algorithms. In this study we propose a new system for controlling auditory depth that incorporates two loudspeakers: one that radiates sound from in front of a listener and another that radiates plane waves from above the listener. With additional equalization that removes the spectral cues corresponding to elevation, the proposed system generates an auditory image "near a listener" and controls the depth perceived by the listener, thereby enhancing the listener's perception of 3-D sound.
Convention Paper 8484
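The abstract does not give the control law that trades the frontal loudspeaker against the overhead plane-wave loudspeaker, so the following is only a plausible sketch: an equal-power crossfade in which the overhead feed dominates when the image should sit near the listener. Function and parameter names are hypothetical:

    import numpy as np

    def depth_gains(depth):
        # depth = 0.0: image near the listener (overhead plane-wave
        # loudspeaker dominates); depth = 1.0: image at the frontal
        # loudspeaker. The equal-power law keeps overall level constant;
        # the actual mapping used in the paper is not specified here.
        theta = 0.5 * np.pi * np.clip(depth, 0.0, 1.0)
        g_front = np.sin(theta)      # gain of the frontal loudspeaker
        g_overhead = np.cos(theta)   # gain of the overhead loudspeaker
        return g_front, g_overhead

The elevation-removing equalization described in the abstract would be applied to the overhead feed before these gains.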

P7-3 The SCENIC Project: Space-Time Audio Processing for Environment-Aware Acoustic Sensing and Rendering
Paolo Annibale, University of Erlangen - Erlangen, Germany; Fabio Antonacci, Paolo Bestagini, Politecnico di Milano - Milan, Italy; Alessio Brutti, Fondazione Bruno Kessler – IRST - Trento, Italy; Antonio Canclini, Politecnico di Milano - Milan, Italy; Luca Cristoforetti, Fondazione Bruno Kessler – IRST - Trento, Italy; Emanuël Habets, University of Erlangen - Erlangen, Germany, and Imperial College London - London, UK; J. Filos, Walter Kellermann, Konrad Kowalczyk, Anthony Lombard, Edwin Mabande, University of Erlangen - Erlangen, Germany; Dejan Markovic, Politecnico di Milano - Milan, Italy; Patrick Naylor, Imperial College London - London, UK; Maurizio Omologo, Fondazione Bruno Kessler – IRST - Trento, Italy; Rudolf Rabenstein, University of Erlangen - Erlangen, Germany; Augusto Sarti, Politecnico di Milano - Milan, Italy; Piergiorgio Svaizer, Fondazione Bruno Kessler – IRST - Trento, Italy; Mark Thomas, Imperial College London - London, UK
SCENIC is an EC-funded project aimed at developing a harmonized corpus of methodologies for environment-aware acoustic sensing and rendering. The project focuses on space-time acoustic processing solutions that do not merely accommodate the environment in the modeling process but actively exploit it to achieve the goal at hand. The solutions developed within the project cover a wide range of applications, including acoustic self-calibration, aimed at estimating the parameters of the acoustic system, and environment inference, aimed at identifying and characterizing the relevant acoustic reflectors in the environment. The information gathered in these steps is then used to boost the performance of wavefield rendering methods, as well as of source localization, characterization, and extraction in reverberant environments.
Convention Paper 8485
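A core geometric idea behind reflector inference of the kind the abstract describes is that a first-order reflection with time of arrival t constrains the reflection point to an ellipse (an ellipsoid in 3-D) whose foci are the source and the microphone, with c*t as the constant path-length sum; intersecting such constraints across several source/microphone pairs localizes the reflector. The toy helper below only returns the ellipse parameters and is illustrative, not the SCENIC estimator:

    import math

    def reflection_ellipse(src, mic, t_reflection, c=343.0):
        # The reflected path length c*t is the constant sum of distances
        # from the reflection point to the two foci (source and
        # microphone), which defines an ellipse.
        path = c * t_reflection     # total reflected path length (m)
        foci = math.dist(src, mic)  # distance between the foci (m)
        a = path / 2.0              # semi-major axis
        b = math.sqrt(max(a * a - (foci / 2.0) ** 2, 0.0))  # semi-minor axis
        return a, b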

P7-4 Object-Based Sound Re-Mix for Spatially Coherent Audio Rendering of an Existing Stereoscopic-3-D Animation Movie
Marc Evrard, University of Liege - Liege, Belgium; Cédric R. André, University of Liege - Liege, Belgium, and LIMSI-CNRS - Orsay, France; Jacques G. Verly, Jean-Jacques Embrechts, University of Liege - Liege, Belgium; Brian F. G. Katz, LIMSI-CNRS - Orsay, France
While 3-D cinema is becoming more mainstream, little effort has focused on the general problem of producing a 3-D sound scene that is spatially coherent with the visual content of a stereoscopic-3-D (s-3D) movie. The perceptual relevance of such spatial audiovisual coherence is of significant interest. In order to carry out such experiments, it is necessary to have an appropriate s-3D movie and its corresponding 3-D audio track. This paper presents the procedure followed to obtain this joint 3-D video and audio content from an existing animated s-3D film, the problems encountered, and some of the solutions employed.
Convention Paper 8486
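As an indication of what an object-based re-mix feeds into a renderer, the sketch below sums mono sound objects into a bus with a per-object propagation delay and a simple 1/r attenuation. The directional panning or binauralization stage needed for true spatial coherence with the picture is omitted, and all names are hypothetical rather than taken from the paper:

    import numpy as np

    def render_objects(objects, fs, c=343.0, ref_dist=1.0):
        # objects: list of (mono_signal, distance_m) pairs.
        # Each object is delayed by its propagation time and
        # attenuated by a 1/r law before summation into one bus.
        length = max(len(s) + int(round(fs * d / c)) for s, d in objects)
        bus = np.zeros(length)
        for sig, dist in objects:
            delay = int(round(fs * dist / c))
            gain = ref_dist / max(dist, ref_dist)
            bus[delay:delay + len(sig)] += gain * np.asarray(sig)
        return bus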


