AES New York 2013
Paper Session P7
P7 - Spatial Audio—Part 1
Thursday, October 17, 4:30 pm — 7:00 pm (Room 1E07)
Chair: Wieslaw Woszczyk, McGill University - Montreal, QC, Canada
P7-1 Reproducing Real-Life Listening Situations in the Laboratory for Testing Hearing Aids—Pauli Minnaar, Oticon A/S - Smørum, Denmark; Signe Frølund Albeck, Oticon A/S - Smørum, Denmark; Christian Stender Simonsen, Oticon A/S - Smørum, Denmark; Boris Søndersted, Oticon A/S - Smørum, Denmark; Sebastian Alex Dalgas Oakley, Oticon A/S - Smørum, Denmark; Jesper Bennedbæk, Oticon A/S - Smørum, Denmark
The main purpose of the current study was to demonstrate how a Virtual Sound Environment (VSE), consisting of 29 loudspeakers, can be used in the development of hearing aids. For a listening test, everyday sound scenes were recorded with a spherical microphone array comprising 32 microphone capsules. Playback in the VSE was implemented by convolving the recordings with inverse filters, which were derived by directly inverting a matrix of 928 measured transfer functions. While listening to 5 sound scenes, 10 hearing-impaired listeners could switch between hearing aid settings in real time by interacting with a touch screen in a MUSHRA-like test. The setup proved to be very valuable for ensuring that hearing aid settings work well in real-world situations.
Convention Paper 8951
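The inverse-filter derivation mentioned in the abstract (inverting a matrix of measured transfer functions, here 32 microphones × 29 loudspeakers = 928 per frequency bin) reduces, in generic form, to a per-bin regularized matrix inversion. The sketch below is illustrative only: it assumes a Tikhonov-regularized pseudo-inverse, and the function name, shapes, and regularization constant are not taken from the paper.

```python
import numpy as np

def inverse_filters(H, beta=1e-3):
    """Regularized frequency-domain inversion of a transfer-function matrix.

    H : complex array of shape (n_bins, n_mics, n_speakers)
        Measured transfer functions per frequency bin
        (e.g., 32 x 29 = 928 entries per bin).
    Returns G of shape (n_bins, n_speakers, n_mics) such that
    H[k] @ G[k] best approximates the identity in the least-squares sense.
    """
    n_bins, n_mics, n_spk = H.shape
    G = np.empty((n_bins, n_spk, n_mics), dtype=complex)
    eye = np.eye(n_spk)
    for k in range(n_bins):
        Hk = H[k]
        # Tikhonov-regularized pseudo-inverse; beta limits filter gains
        # at frequencies where the plant matrix is poorly conditioned
        G[k] = np.linalg.solve(Hk.conj().T @ Hk + beta * eye, Hk.conj().T)
    return G
```

With beta near zero and a well-conditioned square matrix, H[k] @ G[k] approaches the identity; in practice beta trades reproduction accuracy against loudspeaker effort.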
P7-2 Measuring Speech Intelligibility in Noisy Environments Reproduced with Parametric Spatial Audio—Teemu Koski, Aalto University - Espoo, Finland; Ville Sivonen, Cochlear Nordic AB - Vantaa, Finland; Ville Pulkki, Aalto University - Espoo, Finland; Technical University of Denmark - Denmark
This work introduces a method for speech intelligibility testing in reproduced sound scenes. The proposed method uses background sound scenes augmented by target speech sources and reproduced over a multichannel loudspeaker setup with time-frequency-domain parametric spatial audio techniques. Subjective listening tests were performed to validate the proposed method: speech recognition thresholds (SRTs) in noise were measured in a reference sound scene and in a room where the reference was reproduced by a loudspeaker setup. The listening tests showed that for normal-hearing test subjects the method yields speech intelligibility nearly identical to that in the real-life reference when using a nine-loudspeaker reproduction setup in anechoic conditions (<0.3 dB error in SRT). Owing to its flexible technical requirements, the method is potentially applicable in clinical environments. AES 135th Convention Student Technical Papers Award Cowinner
Convention Paper 8952
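Speech recognition thresholds of the kind measured here are commonly tracked with a simple adaptive (1-up/1-down) staircase that converges on the SNR giving 50% sentence recognition. The sketch below is a generic illustration under that assumption; the function name and step size are invented, and the paper's actual adaptive rule is not stated in the abstract.

```python
def measure_srt(present_trial, start_snr=0.0, step_db=2.0, n_trials=20):
    """Generic 1-up/1-down adaptive staircase for SRT estimation.

    present_trial(snr) plays one sentence in the (reproduced) scene at
    the given SNR in dB and returns True if it was correctly repeated.
    The staircase lowers the SNR after a correct response and raises it
    after an incorrect one, tracking the 50% recognition point.
    """
    snr = start_snr
    reversal_snrs = []
    last_correct = None
    for _ in range(n_trials):
        correct = present_trial(snr)
        # a reversal occurs whenever the response direction changes
        if last_correct is not None and correct != last_correct:
            reversal_snrs.append(snr)
        last_correct = correct
        snr += -step_db if correct else step_db
    if not reversal_snrs:
        return snr
    # SRT estimate: mean SNR over the reversal points
    return sum(reversal_snrs) / len(reversal_snrs)
```

For a simulated listener with a hard threshold at -5 dB SNR, the staircase oscillates around and averages close to that threshold.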
P7-3 On the Influence of Headphones on Localization of Loudspeaker Sources—Darius Satongar, University of Salford - Salford, Greater Manchester, UK; Chris Pike, BBC Research and Development - Salford, Greater Manchester, UK; University of York - Heslington, York, UK; Yiu W. Lam, University of Salford - Salford, UK; Tony Tew, University of York - York, UK
When validating systems that use headphones to synthesize virtual sound sources, a direct comparison between virtual and real sources is sometimes needed. This paper presents objective and subjective measurements of the influence of headphones on the sound of external loudspeaker sources. Objective measurements of the effect of a number of headphone models are given and analyzed using an auditory filter bank and binaural cue extraction. The objective results highlight that all of the headphones had an effect on localization cues. A subjective localization test was undertaken using one of the best-performing headphones from the measurements. It was found that the presence of the headphones caused a small increase in localization error, but also that the process of judging source location was different, highlighting a possible increase in the complexity of the localization task.
Convention Paper 8953
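Binaural cue extraction of the kind used in the objective analysis reduces, in its simplest broadband form, to an interaural time difference (ITD) from the cross-correlation peak and an interaural level difference (ILD) from the interaural energy ratio. The sketch below illustrates only that simplified broadband form, without the per-band auditory filter bank the paper applies; the function name and conventions are assumptions.

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Broadband ITD and ILD from a binaural signal pair.

    A simplified stand-in for auditory-filter-bank cue extraction:
    ILD (dB) from the interaural energy ratio, ITD (s) from the lag
    of the interaural cross-correlation maximum.
    """
    # ILD in dB: positive when the left ear is louder
    ild = 10 * np.log10(np.sum(left**2) / np.sum(right**2))
    # lag of the cross-correlation peak of right against left
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    itd = lag / fs  # positive when the signal reaches the left ear first
    return itd, ild
```

For a source to the listener's left, the right-ear signal is a delayed, attenuated copy of the left-ear signal, and both cues come out positive.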
P7-4 Binaural Reproduction of 22.2 Multichannel Sound with Loudspeaker Array Frame—Kentaro Matsui, NHK Science & Technology Research Laboratories - Setagaya, Tokyo, Japan; Akio Ando, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan
NHK has proposed the 22.2 multichannel sound system as an audio format for future TV broadcasting. The system consists of 22 loudspeakers and 2 low-frequency effects loudspeakers for reproducing three-dimensional spatial sound. To allow 22.2 multichannel sound to be reproduced in homes, various reproduction methods that use fewer loudspeakers have been investigated. This paper describes binaural reproduction of 22.2 multichannel sound with a loudspeaker array frame integrated into a flat-panel display. The processing for binaural reproduction is done in the frequency domain. Methods of designing inverse filters for binaural processing with expanded multiple control points are proposed to enlarge the listening area beyond the sweet spot.
Convention Paper 8954
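Inverse-filter design with expanded control points can be illustrated, per frequency bin, as a regularized least-squares problem: choose driver filters so that the field at all control points (the ears plus extra points around them) matches the desired binaural responses. The sketch below assumes a generic Tikhonov-regularized solution; the paper's actual design method is not detailed in the abstract, and all names are invented.

```python
import numpy as np

def control_filters(C, D, beta=1e-2):
    """Least-squares binaural control filters for one frequency bin.

    C : (n_points, n_drivers) plant matrix -- transfer functions from
        the frame's loudspeakers to the control points (the ears plus
        extra "expanded" points around them to widen the listening area).
    D : (n_points, 2) desired binaural responses at each control point.
    Returns W : (n_drivers, 2) driver filters minimizing
    ||C @ W - D||^2 + beta * ||W||^2 (Tikhonov regularization).
    """
    n_drivers = C.shape[1]
    # normal equations of the regularized least-squares problem
    A = C.conj().T @ C + beta * np.eye(n_drivers)
    return np.linalg.solve(A, C.conj().T @ D)
```

Adding control points beyond the two ears makes the system overdetermined, so the filters fit the target field over a region rather than at two points only.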
P7-5 An Offline Binaural Converting Algorithm for 3D Audio Contents: A Comparative Approach to the Implementation Using Channels and Objects—Romain Boonen, SAE Institute Brussels - Brussels, Belgium
This paper describes and compares two offline binaural conversion algorithms based on HRTFs (Head-Related Transfer Functions) for 3D audio content. Recognizing the widespread use of headphones by the typical modern audio content consumer, two strategies for binaurally translating 3D mixes are explored in order to give a convincing 3D aural experience "on the go." Aiming for the best possible output quality and avoiding the compromises inherent in real-time processing, the paper compares the channel- and object-based models: for the former, it examines spectral analysis of the channels so that HRTFs can be applied at intermediate positions between the virtual speakers; for the latter, dynamic convolution of the HRTFs with the objects according to their position coordinates over time.
Convention Paper 8955
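The object-based strategy described above (dynamic convolution of HRTFs with objects according to their position over time) can be sketched as blockwise convolution with the nearest measured HRIR pair. This is a simplified illustration: the names and shapes are assumptions, and a production renderer would also interpolate between measured HRTFs and crossfade across HRIR switches.

```python
import numpy as np

def render_object(signal, azimuths, hrir_bank, block=1024):
    """Object-based binaural rendering by dynamic HRIR convolution.

    signal    : mono object signal
    azimuths  : one azimuth (deg) per block of `block` samples, i.e.,
                the object's position trajectory over time
    hrir_bank : dict mapping measured azimuth -> (hrir_left, hrir_right)

    Each block is convolved with the nearest measured HRIR pair and
    the convolution tails are overlap-added into the stereo output.
    """
    taps = len(next(iter(hrir_bank.values()))[0])
    out = np.zeros((2, len(signal) + taps - 1))
    for i, az in enumerate(azimuths):
        # nearest measured HRIR pair for this block's position
        key = min(hrir_bank, key=lambda a: abs(a - az))
        seg = signal[i * block : (i + 1) * block]
        if len(seg) == 0:
            break
        for ch, h in enumerate(hrir_bank[key]):
            out[ch, i * block : i * block + len(seg) + taps - 1] += np.convolve(seg, h)
    return out
```

For a static object the blockwise result is identical to one long convolution, since convolution is linear over the block partition; only a moving object exercises the HRIR switching.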