AES New York 2009
Poster Session P9
P9 - Spatial Audio
Saturday, October 10, 10:00 am — 11:30 am
P9-1 Surround Sound Track Productions Based on a More Channel Headphone—Florian M. Koenig, Ultrasone AG & Florian König Enterprises GmbH - Germering, Germany
Stereo headphones have become very popular in recent years, driven in part by the development of mp3 and its huge market acceptance. Meanwhile, portable surround sound devices could become the successor of "stereo" applications in consumer electronics, multimedia, and games. Several problems obstruct 3-D sound: listeners need individually reproduced binaural signals (HRTFs depend on outer-ear shape) across all types of headphone use. Additionally, infants' cognition of sound coloration differs from that of adults, which deserves investigation. Commercial headphones need an adaptive, realistic 3-D image across all headphone sound sources (TV, CD, mp3/mobile phone) with a minimum of elevation effect and a virtual distance perception. We realized surround sound mixes with a 4.0 headphone that is 5.1 loudspeaker compatible; they can be demonstrated as well.
Convention Paper 7880
P9-2 Reconstruction and Evaluation of Dichotic Room Reverberation for 3-D Sound Generation—Keita Tanno, Akira Saji, Huakang Li, Tatsuya Katsumata, Jie Huang, The University of Aizu - Aizu-Wakamatsu, Fukushima, Japan
Artificial reverberation is often used to increase realism and prevent in-head localization in headphone-based 3-D sound systems. In traditional methods, diotic reverberations were used. In this research, we measured the impulse responses of several rooms by a Four Point Microphone method and calculated the sound intensity vectors by the Sound Intensity method. From the sound intensity vectors, we obtained the image sound sources. Dichotic reverberation was then reconstructed from the estimated image sound sources. Comparison experiments were conducted for three kinds of reverberation: diotic reverberation, dichotic reverberation, and dichotic reverberation combined with Head-Related Transfer Functions (HRTFs). The results clarify that 3-D sounds reconstructed by dichotic reverberation with HRTFs have more spatial extension than those produced by the other methods.
Convention Paper 7881
P9-3 Reproduction 3-D Sound by Measuring and Construction of HRTF with Room Reverberation—Akira Saji, Keita Tanno, Jie Huang, The University of Aizu - Aizu-Wakamatsu, Fukushima, Japan
In this paper we propose a new method using HRTFs that contain room reverberation (R-HRTFs). The reverberation is not added to the dry sound source separately from the HRTF but is captured in the HRTFs during the measurement process. We measured the HRTFs in a real reverberant environment for azimuths of 0, 45, 90, and 135 degrees (left side) and elevations from 0 to 90 degrees in steps of 10 degrees, then constructed a 3-D sound system over headphones with the measured R-HRTFs and examined whether the realism of the sound is improved. As a result, we succeeded in creating a 3-D spatial sound system with more realism than a traditional HRTF-based signal processing system.
Convention Paper 7882
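The R-HRTF idea, as the abstract describes it, amounts to rendering a dry source through impulse responses whose measurement already includes the room: binaural output is then just a pair of convolutions, with no separate reverb stage. A minimal sketch, with made-up 3-tap filters standing in for measured R-HRIRs:

```python
import numpy as np

def render_binaural(dry, hrir_left, hrir_right):
    """Convolve a dry mono source with a left/right HRIR pair.
    With R-HRIRs (measured in a reverberant room), the room response
    is baked into the filters, so no separate reverb is added."""
    left = np.convolve(dry, hrir_left)
    right = np.convolve(dry, hrir_right)
    return np.stack([left, right])

# Toy 3-tap "HRIRs" (illustrative only, not measured data).
dry = np.array([1.0, 0.5, 0.25])
hl = np.array([1.0, 0.0, 0.3])   # direct path plus a later reflection
hr = np.array([0.0, 0.8, 0.2])   # one sample of interaural delay
out = render_binaural(dry, hl, hr)
print(out.shape)  # (2, 5)
```

Real use would substitute HRIR pairs measured at each azimuth/elevation of interest and select the pair matching the desired source direction.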
P9-4 3-D Sound Synthesis of a Honeybee Swarm—Jussi Pekonen, Antti Jylhä, Helsinki University of Technology - Espoo, Finland
Honeybee swarms are characterized by their buzzing sound, which can be very impressive close to a hive. We present two techniques for real-time sound synthesis of swarming honeybees in a 3-D multichannel setting. Both techniques are based on a source-filter model using a sawtooth oscillator with an all-pole equalization filter. The synthesis is controlled by the motion of the swarm, which is modeled in two different ways: as a set of coupled individual bees or with a swarming algorithm. The synthesized sound can be spatialized using the location information generated by the model. The proposed methods are capable of producing a realistic honeybee swarm effect to be used in, e.g., virtual reality applications.
Convention Paper 7883
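The source-filter model named in the abstract (sawtooth source shaped by an all-pole filter) can be sketched for a single bee as below; the wingbeat frequency, pole radius, and resonance frequency are illustrative guesses, not the authors' values:

```python
import numpy as np

def bee_buzz(freq_hz, dur_s, fs=44100, pole_radius=0.95, pole_freq_hz=600.0):
    """One-bee sketch: naive sawtooth source at the wingbeat frequency,
    shaped by a single two-pole resonator (the simplest all-pole filter).
    Parameter values are illustrative, not from the paper."""
    n = int(dur_s * fs)
    t = np.arange(n) / fs
    # Naive (aliasing) sawtooth in [-1, 1).
    saw = 2.0 * (t * freq_hz - np.floor(0.5 + t * freq_hz))
    # Two-pole resonator: y[k] = x[k] + a1*y[k-1] + a2*y[k-2]
    theta = 2 * np.pi * pole_freq_hz / fs
    a1 = 2 * pole_radius * np.cos(theta)
    a2 = -pole_radius ** 2
    y = np.zeros(n)
    for k in range(n):
        y[k] = saw[k]
        if k >= 1:
            y[k] += a1 * y[k - 1]
        if k >= 2:
            y[k] += a2 * y[k - 2]
    return y / np.max(np.abs(y))

buzz = bee_buzz(220.0, 0.1)
print(buzz.shape)  # (4410,)
```

A swarm would sum many such voices with randomized frequencies and pan each one to its modeled position.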
P9-5 An Investigation of Early Reflection’s Effect on Front-Back Localization in Spatial Audio—Darrin Reed, Robert C. Maher, Montana State University - Bozeman, MT, USA
In a natural sonic environment a listener is accustomed to hearing reflections and reverberation. It is plausible that early reflections could reduce front-back confusion in synthetic 3-D audio. This paper describes an experiment to determine whether simulated reflections can reduce front-back confusion for audio presented with nonindividualized HRTFs via headphones. Although the simple addition of a first-order reflection is not shown to eliminate all front-back confusions, lateral reflections from a side boundary can both assist and inhibit localization ability, depending on the relationship of the source, observer, and reflective boundary.
Convention Paper 7884
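A simulated first-order reflection of the kind the abstract tests is, at its simplest, a delayed and attenuated copy of the direct signal. The sketch below shows only that delay/gain step; a full simulation would also filter the reflected copy with the HRIR of its arrival direction, which is what makes the reflection lateral:

```python
import numpy as np

def add_early_reflection(direct, delay_samples, gain):
    """Mix one simulated first-order reflection into a channel:
    a delayed, attenuated copy of the direct-path signal.
    (Illustrative; the HRIR filtering step is omitted.)"""
    out = np.zeros(len(direct) + delay_samples)
    out[:len(direct)] += direct
    out[delay_samples:] += gain * direct
    return out

sig = np.array([1.0, 0.5])
wet = add_early_reflection(sig, delay_samples=3, gain=0.6)
print(wet)  # direct signal followed by the delayed, 0.6-scaled copy
```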
P9-6 Some Further Investigations on Estimation of HRIRs from Impulse Responses Acquired in Ordinary Sound Fields—Shouichi Takane, Akita Prefectural University - Akita, Japan
The author’s group previously proposed a method for the estimation of Head-Related Transfer Functions (HRTFs), or their corresponding impulse responses, referred to as Head-Related Impulse Responses (HRIRs) [Takane et al., Proc. JCA (2007)]. In this paper the proposed method is further investigated with respect to two parameters affecting the estimation performance. The first parameter is the order of the AR coefficients, indicating how many past samples are assumed to be related to the current sample. It was found that the Signal-to-Deviation Ratio (SDR) was improved by the proposed method when the order of the AR coefficients was about half the number of cutout points. The second parameter is the number of samples used for the computation of the AR coefficients. The results show that the SDR was greatly improved when this number corresponds to the duration of the response. This indicates that the proposed method works properly in ideal situations.
Convention Paper 7885
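The AR modeling the abstract relies on can be illustrated with a least-squares linear-prediction fit: estimate coefficients that predict each sample from its predecessors, then use them to extend a truncated response. The decaying-sinusoid test signal and order below are illustrative, not from the paper (a decaying sinusoid obeys an order-2 AR recursion exactly, so the extrapolated tail should match the true one almost perfectly):

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR fit: x[k] ~ sum_i a[i] * x[k-1-i]."""
    A = np.array([x[k - order:k][::-1] for k in range(order, len(x))])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def extrapolate(x, a, n_extra):
    """Extend a truncated response using the fitted AR model."""
    y = list(x)
    order = len(a)
    for _ in range(n_extra):
        y.append(float(np.dot(a, y[-1:-order - 1:-1])))
    return np.array(y)

# Decaying sinusoid: satisfies x[k] = 2r*cos(w)*x[k-1] - r^2*x[k-2].
fs = 1000
k = np.arange(200)
x = 0.99 ** k * np.sin(2 * np.pi * 50 * k / fs)
a = fit_ar(x[:100], order=2)          # fit on the truncated first half
x_hat = extrapolate(x[:100], a, 100)  # reconstruct the cut-off tail
print(np.max(np.abs(x_hat - x)))      # tiny: the AR model recovers the tail
```

A measured HRIR is not an exact AR process, so the paper's SDR results quantify how well such extrapolation holds up on real responses.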
P9-7 Virtual Ceiling Speaker: Elevating Auditory Imagery in a 5-Channel Reproduction—Sungyoung Kim, Masahiro Ikeda, Akio Takahashi, Yusuke Ono, Yamaha Corp. - Iwata, Shizuoka, Japan; William L. Martens, University of Sydney - Sydney, NSW, Australia
In this paper we propose a novel signal processing method called Virtual Ceiling Speaker (VCS) that creates virtually elevated auditory imagery via a 5-channel reproduction system. The proposed method is based on transaural crosstalk cancellation using three channels: center, left-surround, and right-surround. The VCS reproduces a binaurally elevated signal via two surround loudspeakers that inherently reduce transaural crosstalk, while the residual crosstalk component is suppressed by a center channel signal that is optimized for natural perception of elevated sound. Subjective evaluations show that the virtually elevated auditory imagery maintains similar perceptual characteristics when compared to sound produced from an elevated loudspeaker. Moreover, the elevated sound contributes to an enhanced sense of musical expressiveness and spatial presence in music reproduction.
Convention Paper 7886
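Transaural crosstalk cancellation, the basis of VCS, can be sketched per frequency bin as a regularized inversion of the 2x2 acoustic path matrix between two loudspeakers and the two ears. The symmetric-listener assumption and the toy transfer values below are illustrative only, not the paper's optimized design:

```python
import numpy as np

def crosstalk_canceller(H_ii, H_ci, eps=1e-6):
    """Per-bin 2x2 crosstalk canceller for a symmetric listener.
    H_ii: ipsilateral speaker-to-ear transfer (array over bins)
    H_ci: contralateral (crosstalk) transfer
    Returns filters (C_ii, C_ci) so that the acoustic 2x2 matrix
    times the filter matrix is approximately the identity."""
    det = H_ii ** 2 - H_ci ** 2
    det = np.where(np.abs(det) < eps, eps, det)  # regularize near-singular bins
    return H_ii / det, -H_ci / det

# Toy single-bin check: ipsilateral path 1.0, contralateral 0.4.
C_ii, C_ci = crosstalk_canceller(np.array([1.0]), np.array([0.4]))
# For binaural input (L=1, R=0), the ear signals after cancellation:
left_ear = 1.0 * C_ii + 0.4 * C_ci   # should be ~1 (signal preserved)
right_ear = 0.4 * C_ii + 1.0 * C_ci  # should be ~0 (crosstalk cancelled)
print(float(left_ear[0]), float(right_ear[0]))
```

The VCS design additionally routes a residual-suppression signal through the center channel, which this two-speaker sketch does not model.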
P9-8 Spatial Soundfield Reproduction with Zones of Quiet—Thushara Abhayapala, Yan Jennifer Wu, Australian National University - Canberra, Australia
Reproduction of a spatial soundfield in an extended region of open space with a designated quiet zone is a challenging problem in audio signal processing. We show how to reproduce a given spatial soundfield without altering a nearby quiet zone. In this paper we design a spatial band stop filter over the zone of quiet to suppress the interference from the nearby desired soundfield. This is achieved by using the higher order spatial harmonics to cancel the undesirable effects of the lower order harmonics of the desired soundfield on the zone of quiet. We illustrate the theory and design by simulating a 2-D spatial soundfield.
Convention Paper 7887
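The machinery behind such designs is the cylindrical (2-D) harmonic expansion of a soundfield, in which the field over a region of radius r is captured by harmonic orders |n| up to roughly kr; this is what lets higher-order harmonics be shaped to cancel the low-order field inside a displaced quiet zone. The sketch below only verifies the truncated expansion (Jacobi-Anger identity) for a plane wave, under that framework; it is not the paper's band-stop filter:

```python
import numpy as np

def bessel_j(n, x, m=20000):
    """Integer-order Bessel J_n via its integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) dtau,
    evaluated with the midpoint rule (valid for negative n too)."""
    tau = (np.arange(m) + 0.5) * np.pi / m
    return np.sum(np.cos(n * tau - x * np.sin(tau))) / m

def plane_wave_expansion(k, r, phi, order):
    """Jacobi-Anger expansion of a unit plane wave travelling along +x,
    truncated at harmonic order |n| <= order:
        e^{i k r cos(phi)} = sum_n  i^n J_n(kr) e^{i n phi}"""
    total = 0.0 + 0.0j
    for n in range(-order, order + 1):
        total += (1j ** n) * bessel_j(n, k * r) * np.exp(1j * n * phi)
    return total

k, r, phi = 2 * np.pi, 0.5, 0.3           # 1 m wavelength, point at 0.5 m
exact = np.exp(1j * k * r * np.cos(phi))  # closed-form plane wave
approx = plane_wave_expansion(k, r, phi, order=10)
print(abs(exact - approx))  # tiny: orders up to ~kr carry the field here
```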