144th AES CONVENTION Paper Session P12: Spatial Audio-Part 2

AES Milan 2018

P12 - Spatial Audio-Part 2

Thursday, May 24, 13:30 — 15:30 (Scala 4)

Chair: Stefania Cecchi, Università Politecnica delle Marche - Ancona, Italy

P12-1 Binaural Room Impulse Responses Interpolation for Multimedia Real-Time Applications
Victor Garcia-Gomez, Universitat Politecnica de Valencia - Valencia, Spain; Jose J. Lopez, Universidad Politecnica de Valencia - Valencia, Spain
This paper presents a novel method for the interpolation of Binaural Room Impulse Responses (BRIRs). The algorithm is based on a time-frequency decomposition of the BRIRs, combined with an elaborate peak-searching and matching stage for the early reflections, followed by interpolation. The algorithm was tested with real room data through a perceptual subjective test, and it outperforms state-of-the-art algorithms in both quality and computational cost.
Convention Paper 9962 (Purchase now)
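The core idea of matching early-reflection peaks between two measured responses and interpolating their times and amplitudes can be sketched as follows. This is a hypothetical illustration only, not the authors' published algorithm: the paper additionally uses a time-frequency decomposition and a more elaborate matching stage, and the function name, peak-prominence threshold, and nearest-peak matching rule here are all assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def match_and_interpolate_peaks(h_a, h_b, alpha, prominence=0.1):
    """Hypothetical sketch of early-reflection interpolation between
    two room impulse responses h_a and h_b, with alpha in [0, 1]
    selecting a position between the two measurement points.
    Not the paper's method; it only illustrates the coarse idea of
    pairing reflection peaks and interpolating their parameters."""
    # Locate prominent reflection peaks in each response.
    pk_a, _ = find_peaks(np.abs(h_a), prominence=prominence)
    pk_b, _ = find_peaks(np.abs(h_b), prominence=prominence)
    h_out = np.zeros_like(h_a)
    for ta in pk_a:
        # Match each peak in h_a to the nearest peak in h_b
        # (a real matcher would also check amplitude similarity).
        tb = pk_b[np.argmin(np.abs(pk_b - ta))]
        # Interpolate arrival time and amplitude of the matched pair.
        t = int(round((1 - alpha) * ta + alpha * tb))
        h_out[t] = (1 - alpha) * h_a[ta] + alpha * h_b[tb]
    return h_out
```

For example, a reflection arriving at sample 10 in one response and at sample 20 in the other is rendered at sample 15 for the midpoint position, which is the behavior a plain sample-wise crossfade cannot produce (it would smear the peak into two half-amplitude copies).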

P12-2 Evaluation of Binaural Renderers: Localization
Gregory Reardon, New York University - New York, NY, USA; Andrea Genovese, New York University - New York, NY, USA; Gabriel Zalles, New York University - New York, NY, USA; Patrick Flanagan, THX Ltd. - San Francisco, CA, USA; Agnieszka Roginska, New York University - New York, NY, USA
Binaural renderers can be used to reproduce spatial audio over headphones. A number of different renderers have recently become commercially available for creating immersive audio content. High-quality spatial audio can significantly enhance experiences in a number of media applications, such as virtual, mixed, and augmented reality, computer games, and music and movies. A large multi-phase experiment evaluating six commercial binaural renderers was performed. This paper presents the methodology, evaluation criteria, and main findings of the horizontal-plane source localization experiment carried out with these renderers. Significant differences between the renderers' regional localization accuracy were found. Consistent with previous research, subjects tended to localize better at the front and back of the head than at the sides. Differences in renderer performance at the side regions contributed heavily to their overall regional localization accuracy.
Convention Paper 9963 (Purchase now)

P12-3 Characteristics of Vertical Sound Image with Two Parametric Loudspeakers
Shigeaki Aoki, Kanazawa Institute of Technology - Nonoichi, Ishikawa, Japan; Kazuhiro Shimizu, Kanazawa Institute of Technology - Nonoichi, Japan; Kouki Itou, Kanazawa Institute of Technology - Nonoichi, Japan
A parametric loudspeaker exploits the nonlinearity of the propagation medium and is known as a super-directive loudspeaker. So far, its applications have been limited to monaural reproduction systems. We previously discussed the characteristics of stereo reproduction with two parametric loudspeakers. In this paper, sound localization in the vertical direction using upper and lower parametric loudspeakers was confirmed by listening tests, with the level difference between the upper and lower loudspeakers varied as a parameter. The direction of sound localization in the vertical plane could be controlled, and we also obtained interesting characteristics of left-right sound localization in the horizontal plane. A simple geometrical acoustic model was introduced and analyzed, and the analysis explained the measured characteristics.
Convention Paper 9964 (Purchase now)

P12-4 Virtual Hemispherical Amplitude Panning (VHAP): A Method for 3D Panning without Elevated Loudspeakers
Hyunkook Lee, University of Huddersfield - Huddersfield, UK; Dale Johnson, The University of Huddersfield - Huddersfield, UK; Maksims Mironovs, University of Huddersfield - Huddersfield, West Yorkshire, UK
This paper proposes "virtual hemispherical amplitude panning (VHAP)," an efficient 3D panning method that exploits the phantom image elevation effect. Previous research found that a phantom center image produced by two laterally placed loudspeakers is localized above the listener. Based on this principle, VHAP positions a phantom image over a virtual upper hemisphere using just four ear-level loudspeakers placed at the listener's left side, right side, front center, and back center. A constant-power amplitude panning law is applied among the four loudspeakers. A listening test was conducted to evaluate the localization performance of VHAP. Results indicate that the proposed method can place a phantom image at various spherical coordinates in the upper hemisphere, with some limitations in accuracy and resolution.
Convention Paper 9965 (Purchase now)
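A generic constant-power panning law over four ear-level loudspeakers can be sketched as below. This is a hypothetical illustration, not the authors' published VHAP gain law: the function name, the azimuth convention (0° = front, 90° = left), and the choice of encoding elevation by spreading energy toward equal gains at the zenith are all assumptions made for the sketch.

```python
import math

def vhap_gains(azimuth_deg, elevation_deg):
    """Hypothetical constant-power gain sketch for four ear-level
    loudspeakers at front-center, back-center, left, and right.
    Not the authors' exact VHAP formulation."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Unnormalized panning weights: direction-dependent at ear level.
    w = {
        "front": max(math.cos(az), 0.0),
        "back":  max(-math.cos(az), 0.0),
        "left":  max(math.sin(az), 0.0),
        "right": max(-math.sin(az), 0.0),
    }
    # Encode elevation by blending toward equal weights, so the
    # zenith (el = 90 deg) excites all four loudspeakers equally.
    spread = math.sin(max(el, 0.0))
    w = {k: (1 - spread) * v + spread * 0.5 for k, v in w.items()}
    # Normalize so the sum of squared gains is 1 (constant power).
    norm = math.sqrt(sum(v * v for v in w.values()))
    return {k: v / norm for k, v in w.items()}
```

Constant power means the squared gains always sum to one, so the perceived loudness stays roughly constant as the phantom source moves: a source straight ahead at ear level feeds only the front loudspeaker, while a source at the zenith feeds all four loudspeakers with equal gain.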
