AES New York 2009
Paper Session P7

P7 - Audio in Multimodal Applications


Saturday, October 10, 9:00 am — 1:00 pm
Chair: Rob Maher, Montana State University - Bozeman, MT, USA

P7-1 Listening within You and without You: Center-Surround Listening in Multimodal Displays
Thomas Santoro, Naval Submarine Medical Research Laboratory (NSMRL) - Groton, CT, USA; Agnieszka Roginska, New York University - New York, NY, USA; Gregory Wakefield, University of Michigan - Ann Arbor, MI, USA
Improvements in listener cognitive performance resulting from implementations of enhanced spatialized auditory displays are considered. Investigations of cognitive decision making driven by stimuli organized in “center-surround” auditory-only and visual-only arrangements are described. These new prototype interfaces, which employ a center-surround organization (“listening within – listening without”), exploit the capability of the auditory and visual modalities for concurrent operation and support cognitive performance in synthetic, immersive environments.
Convention Paper 7865 (Purchase now)

P7-2 A Loudspeaker Design to Enhance the Sound Image Localization on Large Flat Displays
Gabriel Pablo Nava, Keiji Hirata, NTT Communication Science Laboratories - Seika-cho, Kyoto, Japan; Masato Miyoshi, Kanazawa University - Ishikawa-ken, Japan
A fundamental problem in auditory displays implemented with conventional stereo loudspeakers is that correct localization of sound images is perceived only at the sweet spot and along the symmetrical axis of the stereo array. Although several signal-processing techniques have been proposed to expand the listening area, comparatively little research on the loudspeaker configuration itself has been reported. This paper describes a new loudspeaker design that enhances the localization of sound images on the surface of flat display panels over a wide listening area. Numerical simulations of the radiated acoustic field and subjective tests performed with a prototype panel show that the simple design principle effectively modifies the radiation pattern so as to widen the listening area.
Convention Paper 7866 (Purchase now)
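The abstract above mentions numerical simulations of the radiated field. A minimal sketch of that kind of simulation is given below, treating two drivers as in-phase point monopoles and sweeping azimuth; the frequency, driver spacing, and drive signals are illustrative assumptions, not the authors' actual panel design.

# Minimal far-field radiation sketch for a pair of point-like drivers.
# This is NOT the authors' loudspeaker design; it only illustrates how a
# driver layout changes the radiation pattern (and hence the listening area).
import numpy as np

C = 343.0            # speed of sound, m/s
FREQ = 1000.0        # analysis frequency, Hz (assumed)
SPACING = 0.20       # driver spacing, m (assumed)

def radiation_pattern(angles_rad, spacing=SPACING, freq=FREQ):
    """Far-field |pressure| of two in-phase monopoles vs. azimuth."""
    k = 2.0 * np.pi * freq / C                      # wavenumber
    # Path-length difference between the two drivers toward each angle
    delta = (spacing / 2.0) * np.sin(angles_rad)
    p = np.exp(1j * k * delta) + np.exp(-1j * k * delta)
    return np.abs(p) / 2.0                          # normalized magnitude

if __name__ == "__main__":
    angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
    pattern = radiation_pattern(angles)
    for a, m in zip(np.degrees(angles[::30]), pattern[::30]):
        print(f"{a:6.1f} deg -> {m:.2f}")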

P7-3 A Method for Multimodal Auralization of Audio-Tactile Stimuli from Acoustic and Structural Measurements
Clemeth Abercrombie, Jonas Braasch, Rensselaer Polytechnic Institute - Troy, NY, USA
A new method for the reproduction of sound and vibration for arbitrary musical source material based on physical measurements is presented. Tactile signals are created by the convolution of “uncoupled” vibration with impulse responses derived from mechanical impedance measurements. Audio signals are created by the convolution of anechoic sound with binaural room impulse responses. Playback is accomplished through headphones and a calibrated motion platform. Benefits of the method include the ability to make multimodal, side-by-side listening tests for audio-tactile stimuli perceived in real music performance situations. Details of the method are discussed along with obstacles and applications. Structural response measurements are presented as validation of the need for measured vibration signals in audio-tactile displays.
Convention Paper 7867 (Purchase now)
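The two convolution paths described above (anechoic sound with binaural room impulse responses for the ears, "uncoupled" vibration with impulse responses derived from mechanical impedance measurements for the motion platform) could be sketched as below. The signal names and the simple peak normalization are illustrative assumptions, not the authors' calibration procedure.

# Sketch of the two convolution paths named in the abstract:
#   tactile  = "uncoupled" vibration  * impulse response from impedance data
#   binaural = anechoic recording     * binaural room impulse response (BRIR)
import numpy as np
from scipy.signal import fftconvolve

def auralize(anechoic, brir_left, brir_right, vibration, structural_ir):
    """Return (stereo audio, tactile drive signal) for playback on
    headphones and a calibrated motion platform."""
    left = fftconvolve(anechoic, brir_left, mode="full")
    right = fftconvolve(anechoic, brir_right, mode="full")
    tactile = fftconvolve(vibration, structural_ir, mode="full")
    # Normalize to avoid clipping; a real system would instead apply
    # measured calibration gains for both playback chains.
    peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1e-12)
    audio = np.stack([left, right]) / peak
    tactile = tactile / (np.max(np.abs(tactile)) + 1e-12)
    return audio, tactile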

P7-4 "Worms" in (E)motion: Visualizing Emotions Evoked by MusicFrederik Nagel, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Reinhard Kopiez, Hanover University of Music and Drama - Hanover, Germany; Oliver Grewe, Studienstiftung des deutschen Volkes e. V. - Bonn, Germany; Eckart Altenmüller, Hanover University of Music and Drama - Hanover, Germany
Music plays an important role in everyday human life, and one important reason is its capacity to influence listeners’ emotions. This study describes the application of a recently developed interface for visualizing emotions felt while listening to music. Subjects (n = 38) listened to 7 musical pieces of different styles and reported the emotions they felt, in real time, in a two-dimensional emotion space using computer software. Films were created from the time series of all self-reports as a synopsis. This visualization technique offers an appealing approach to data analysis and makes it possible to investigate commonalities in the emotional self-reports as well as differences between subjects. In addition to presenting the films, the authors discuss possible applications in areas such as the social sciences, musicology, and the music industry.
Convention Paper 7868 (Purchase now)
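A minimal sketch of the kind of "worm" visualization described above follows: a self-report trajectory drawn in a two-dimensional valence/arousal emotion space, with older samples faded so the current state stands out. The axis labels, ranges, and the synthetic demo trajectory are assumptions for illustration; the study itself rendered the self-reports as films.

# Sketch: draw an emotion "worm" as a trajectory in a 2-D valence/arousal
# space from a time series of self-reports (axes and data are illustrative).
import numpy as np
import matplotlib.pyplot as plt

def plot_worm(valence, arousal, title="Emotion worm"):
    """valence, arousal: 1-D arrays sampled at the same instants."""
    fig, ax = plt.subplots()
    n = len(valence)
    # Fade older segments so the head of the worm stands out
    for i in range(1, n):
        ax.plot(valence[i-1:i+1], arousal[i-1:i+1],
                color="steelblue", alpha=0.2 + 0.8 * i / n)
    ax.plot(valence[-1], arousal[-1], "o", color="crimson")  # current state
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.axhline(0, lw=0.5)
    ax.axvline(0, lw=0.5)
    ax.set_xlabel("valence")
    ax.set_ylabel("arousal")
    ax.set_title(title)
    return fig

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200)              # synthetic demo trajectory
    plot_worm(0.6 * np.cos(t), 0.4 * np.sin(2 * t)).savefig("worm.png")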

P7-5 Enhanced Automatic Noise Removal Platform for Broadcast, Forensic, and Mobile Applications
Shamail Saeed, Harinarayanan E.V., ATC Labs - Noida, India; Deepen Sinha, ATC Labs - Chatham, NJ, USA; Balaji V., ATC Labs - Noida, India
We present new enhancements and additions to our previously proposed Adaptive/Automatic Wide Band Noise Removal (AWNR) algorithm. AWNR uses a novel framework employing dominant component subtraction followed by adaptive Kalman filtering and subsequent restoration of the dominant components; the Kalman filter model parameters are estimated using a multi-component Signal Activity Detector (SAD) algorithm. The enhancements presented here include two refinements of the core filtering algorithm: a multi-band filtering framework and a colored noise model. In the first case we show how the openness of the filtered signal improves with a two-band structure in which each band is filtered independently. The colored noise model, in turn, improves the level of filtering for a wider range of noise types. We also describe two structural enhancements to the AWNR algorithm that allow it to better handle dual-microphone recording scenarios and forensic/restoration applications, respectively. Using an independent capture from a noise microphone, the level of filtering is substantially increased; for forensic applications, a two-pass or multiple-pass filtering framework in which SAD profiles can be fine-tuned through manual intervention is desirable.
Convention Paper 7869 (Purchase now)
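The AWNR core is not specified in detail in the abstract, so the sketch below only illustrates two of the ingredients it names: splitting the signal into two bands that are filtered independently, with a simple first-order Kalman smoother standing in for the actual adaptive filter. The crossover frequency and noise variances are illustrative guesses, not the authors' parameters.

# Two-band split with independent filtering; a basic scalar Kalman smoother
# stands in for the (unpublished) AWNR adaptive filter. Illustration only.
import numpy as np
from scipy.signal import butter, sosfilt

def kalman_smooth(x, process_var=1e-4, meas_var=1e-2):
    """Scalar Kalman filter tracking the underlying sample value."""
    x = np.asarray(x, dtype=float)
    est, p = 0.0, 1.0
    out = np.empty_like(x)
    for i, z in enumerate(x):
        p += process_var                      # predict
        k = p / (p + meas_var)                # Kalman gain
        est += k * (z - est)                  # update
        p *= (1.0 - k)
        out[i] = est
    return out

def two_band_denoise(x, fs, crossover_hz=1000.0):
    """Split at an assumed crossover, filter each band independently, recombine."""
    sos_lo = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, crossover_hz, btype="high", fs=fs, output="sos")
    low, high = sosfilt(sos_lo, x), sosfilt(sos_hi, x)
    return kalman_smooth(low) + kalman_smooth(high, meas_var=5e-2)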

P7-6 Catch Your Breath—Musical Biofeedback for Breathing Regulation
Diana Siwiak, Jonathan Berger, Yao Yang, Stanford University - Stanford, CA, USA
Catch Your Breath is an interactive musical biofeedback system adapted from a project designed to reduce respiratory irregularity in patients undergoing 4D-CT scans for oncological diagnosis. The medical application is currently implemented and undergoing assessment as a means to reduce motion-induced distortion in CT images; the same framework was implemented as an interactive art installation. The principle is simple: the subject’s breathing motion is tracked via video camera using fiducial markers and interpreted as a real-time variable tempo adjustment to a MIDI file. The subject adjusts breathing to synchronize with a separate accompaniment line. When the subject’s breathing is regular and at the desired tempo, the audible result sounds synchronous and harmonious. The accompaniment’s tempo gradually decreases, which causes the breathing to slow while remaining synchronized, thus increasing relaxation.
Convention Paper 7870 (Purchase now)
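A minimal sketch of the core mapping described above, assuming the camera tracking and MIDI playback exist elsewhere: estimate a breathing rate from the vertical position of a tracked fiducial marker, then turn it into a tempo multiplier for the accompaniment. The constants and the zero-crossing rate estimator are assumptions, not details from the paper.

# Sketch: breathing rate from a tracked marker -> tempo adjustment for MIDI
# playback. Camera tracking and the MIDI sequencer are assumed to exist
# elsewhere; constants are illustrative, not from the paper.
import numpy as np

TARGET_BPM = 60.0          # tempo the accompaniment drifts toward (assumed)

def breaths_per_minute(marker_y, fps):
    """Estimate breathing rate from the vertical position of a fiducial
    marker, using zero crossings of the detrended signal."""
    y = np.asarray(marker_y, dtype=float)
    y -= y.mean()
    signs = np.signbit(y).astype(int)
    crossings = np.count_nonzero(np.diff(signs))
    cycles = crossings / 2.0
    return 60.0 * cycles * fps / len(y)

def tempo_scale(breath_bpm, beats_per_breath=4.0):
    """Map breathing rate to a playback-tempo multiplier that follows the
    breath while gently pulling it toward TARGET_BPM."""
    implied_bpm = breath_bpm * beats_per_breath
    return 0.9 * (implied_bpm / TARGET_BPM) + 0.1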

P7-7 Wavefield Synthesis for Interactive Sound Installations
Grace Leslie, IRCAM - Paris, France, University of California, San Diego - La Jolla, CA, USA; Diemo Schwarz, Olivier Warusfel, Frédéric Bevilacqua, Pierre Jodlowski, IRCAM - Paris, France
Wavefield synthesis (WFS), the spatialization of audio through the recreation of a virtual source’s wavefront, is uniquely suited to interactive applications where listeners move throughout the rendering space and more than one listener is involved. This paper describes the features of WFS that make it useful for interactive applications, and takes a recent project at IRCAM as a case study that demonstrates these advantages. The interactive installation GrainStick was developed as a collaboration between the composer Pierre Jodlowski and the European project Sound And Music For Everyone Everyday Everywhere Everyway (SAME) at IRCAM, Paris. The interaction design of GrainStick presents a new development in multimodal interfaces and multichannel sound by allowing users control of their auditory scene through gesture analysis performed on infrared camera motion tracking and accelerometer data.
Convention Paper 7871 (Purchase now)
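The delay-and-gain form of wave field synthesis for a virtual point source behind a linear loudspeaker array can be sketched as below. This is the textbook WFS geometry rather than IRCAM's production renderer, and the array layout and source position are example values.

# Minimal WFS sketch: each loudspeaker gets a delay and gain so a linear
# array reconstructs the wavefront of a virtual point source behind it.
import numpy as np

C = 343.0   # speed of sound, m/s

def wfs_delays_gains(speaker_xy, source_xy):
    """Per-speaker delay (s) and amplitude weight for a virtual point source
    located behind a linear loudspeaker array."""
    spk = np.asarray(speaker_xy, dtype=float)      # shape (N, 2)
    src = np.asarray(source_xy, dtype=float)       # shape (2,)
    r = np.linalg.norm(spk - src, axis=1)          # source-to-speaker distances
    delays = r / C                                  # farther speakers fire later
    gains = 1.0 / np.sqrt(np.maximum(r, 0.1))       # 1/sqrt(r) amplitude roll-off
    return delays - delays.min(), gains / gains.max()

if __name__ == "__main__":
    array = [(x, 0.0) for x in np.linspace(-2.0, 2.0, 16)]  # 16 speakers on a line
    d, g = wfs_delays_gains(array, source_xy=(0.5, -3.0))   # source behind the array
    print(np.round(d * 1000, 2), "ms")
    print(np.round(g, 2))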

P7-8 Eidola: An Interactive Augmented Reality Audio-Game Prototype
Nikolaos Moustakas, Andreas Floros, Nikolaos-Grigorios Kanellopoulos, Ionian University - Corfu, Greece
Augmented reality audio represents a new trend in digital technology that enriches the real acoustic environment with synthesized sound produced by virtual sound objects. An audio-game, on the other hand, relies only on audible feedback rather than visual feedback. In this paper an audio-game prototype is presented that takes advantage of the characteristics of augmented reality audio to realize more complex audio-game scenarios. The prototype was realized as an audiovisual interactive installation, allowing the physical game space to be further involved as a secondary component in constructing the ambient game environment. A sequence of tests has shown that the proposed prototype can efficiently support complex game scenarios, provided that the necessary advanced interaction paths are available.
Convention Paper 7872 (Purchase now)