AES New York 2017
Engineering Brief Session EB06: Spatial Audio
Saturday, October 21, 1:30 pm — 3:15 pm (Rm 1E12)
Chair: Matthieu Parmentier, francetélévisions - Paris, France
EB06-1 How Streaming Object Based Audio Might Work—Adrian Wisbey, BBC Design and Engineering - London, UK
Object-based media is being considered as the future platform model by a number of broadcasting and production organizations. This paper is a personal imagining of how object-based broadcasting might be implemented with IP media as the primary distribution whilst still supporting traditional distributions such as FM, DAB, and DVB. The examples assume a broadcaster supporting a number of linearly scheduled services providing both live (simulcast) and on-demand (catch-up) content. The reader is assumed to understand the basics of object-based audio production and broadcasting. Whilst this paper specifically discusses audio and radio broadcasting, many of the components and requirements are equally valid in a video environment.
Engineering Brief 398
EB06-2 DIY Measurement of Your Personal HRTF at Home: Low-Cost, Fast and Validated—Jonas Reijniers, University of Antwerp - Antwerpen, Belgium; Bart Partoens, University of Antwerp - Antwerp, Belgium; Herbert Peremans, University of Antwerp - Antwerpen, Belgium
The breakthrough of 3D audio has been hampered by the lack of the personalized head-related transfer functions (HRTFs) required to create realistic 3D audio environments using headphones. In this paper we present a new method for users to personalize their HRTFs that is comparable to a measurement in an anechoic room, yet low-cost and able to be carried out at home. We compare the resulting HRTFs with those measured in an anechoic room. Subjecting the participants to a virtual localization experiment, we show that they perform significantly better when using their personalized HRTFs than when using a generic HRTF. We believe this method has the potential to open the way for large-scale commercial use of 3D audio through headphones.
Engineering Brief 399
EB06-3 Audio Localization Method for VR Application—Joo Won Park, Columbia University - New York, NY, USA
Audio localization is a crucial component of Virtual Reality (VR) projects, as it contributes to a more realistic VR experience for users. In this paper a method to implement localized audio that is synced with the user's head movement is discussed. The goal is to process an audio signal in real time to represent a three-dimensional soundscape. This paper introduces mathematical concepts, acoustic models, and audio processing techniques that can be applied to general VR audio development. It also provides a detailed overview of a demo built with the Oculus Rift and Max/MSP.
Engineering Brief 400
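The head-synced localization the brief describes can be illustrated with a minimal interaural-cue sketch. This is not the author's Max/MSP implementation: the head radius, Woodworth's ITD formula, and the broadband level-difference curve below are illustrative assumptions about how head-relative cues might be recomputed per frame.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, an assumed average head radius

def localize(signal, source_az_deg, head_yaw_deg, fs=48000):
    """Render a mono signal to (left, right) using crude interaural cues.

    The head-relative azimuth is recomputed from the tracked head yaw,
    then turned into an interaural time difference (Woodworth's formula)
    and a simple broadband level difference for the far ear.
    """
    rel = np.radians(source_az_deg - head_yaw_deg)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (rel + np.sin(rel))
    delay = int(round(abs(itd) * fs))        # far-ear delay in samples
    far_gain = 0.5 * (1.0 + np.cos(rel))     # crude head-shadow attenuation
    near = np.asarray(signal, dtype=float)
    far = np.concatenate([np.zeros(delay), near]) * far_gain
    near = np.pad(near, (0, delay))          # equalize output lengths
    # Positive relative azimuth: source to the right, so the right ear is near.
    return (far, near) if rel > 0 else (near, far)
```

In a real-time engine this function would be re-evaluated every audio block with the latest head-tracker yaw, and the hard delay would be replaced by a smoothed fractional delay to avoid clicks.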
EB06-4 Sound Fields Forever: Mapping Sound Fields via Position-Aware Smartphones—Scott Hawley, Belmont University - Nashville, TN, USA; Sebastian Alegre, Belmont University - Nashville, TN, USA; Brynn Yonker, Belmont University - Nashville, TN, USA
Google Project Tango is a suite of built-in sensors and libraries, intended for Augmented Reality applications, that allows certain mobile devices to track their motion and orientation in three dimensions without the need for any additional hardware. Our new Android app, "Sound Fields Forever," combines these locations with sound intensity data in multiple frequency bands taken from a co-moving external microphone plugged into the phone's analog jack. These data are sent wirelessly to a visualization server running in a web browser. The system is intended for roles in education, live sound reinforcement, and architectural acoustics. The relatively low cost of our approach compared to more sophisticated 3D acoustical mapping systems could make it an accessible option for such applications.
Engineering Brief 401
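The per-position measurement such an app logs can be sketched as a band-level analysis of one microphone frame. The band edges below are illustrative, not the app's actual bands, and the FFT-binning approach is an assumption about how multi-band intensity might be computed.

```python
import numpy as np

def band_levels_db(frame, fs,
                   bands=((125, 250), (250, 500), (500, 1000), (1000, 2000))):
    """Return a dB level per frequency band for one microphone frame.

    Power is estimated by summing squared FFT magnitudes over the bins
    that fall inside each band.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    levels = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        power = spectrum[mask].sum() + 1e-20   # avoid log of zero
        levels.append(10 * np.log10(power))
    return levels
```

Each result would then be paired with the Tango pose at capture time, giving one (position, band levels) sample for the visualization server.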
EB06-5 Real-time Detection of MEMS Microphone Array Failure Modes for Embedded Microprocessors—Andrew Stanford-Jason, XMOS Ltd. - Bristol, UK
In this paper we describe an online system for real-time detection of common failure modes in arrays of MEMS microphones, with a specific focus on reduced computational complexity for application in embedded microprocessors. The system detects deviations in long-term spectral content and microphone covariance to identify failures, while being robust to the false negatives inherent in a passively driven online system. Data collected from real compromised microphones show that we can achieve high rates of failure detection.
Engineering Brief 402
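The covariance- and level-based detection idea can be sketched as follows. This is not the authors' embedded implementation: the thresholds and the use of full-rate correlation are illustrative assumptions (an embedded version would work on decimated long-term statistics).

```python
import numpy as np

def detect_failed_mics(frames, corr_threshold=0.3, level_threshold_db=-40.0):
    """Flag microphones whose long-term behavior deviates from the array.

    frames: (n_mics, n_samples) array of time-aligned audio.
    A mic is flagged if its average correlation with the other mics
    collapses (e.g., a dead capsule) or its long-term level falls far
    below the array median (e.g., a blocked port).
    """
    n_mics = frames.shape[0]
    corr = np.corrcoef(frames)                        # normalized covariance
    mean_corr = (corr.sum(axis=1) - 1.0) / (n_mics - 1)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    level_db = 20 * np.log10(rms / np.median(rms) + 1e-12)
    return [i for i in range(n_mics)
            if mean_corr[i] < corr_threshold or level_db[i] < level_threshold_db]
```

Because the decision uses long-term averages, a single quiet frame does not trigger a flag, which matches the brief's concern with false results in a passively driven system.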
EB06-6 A Toolkit for Customizing the ambiX Ambisonics-to-Binaural Renderer—Joseph G. Tylka, Princeton University - Princeton, NJ, USA; Edgar Choueiri, Princeton University - Princeton, NJ, USA
An open-source collection of MATLAB functions, referred to as the SOFA/ambiX binaural rendering (SABRE) toolkit, is presented for generating custom ambisonics-to-binaural decoders for the ambiX binaural plug-in. Databases of head-related transfer functions (HRTFs) are becoming widely available in the recently standardized SOFA format (spatially oriented format for acoustics), but there is currently no easy way to use custom HRTFs with the ambiX binaural plug-in. This toolkit enables the user to generate custom binaural rendering configurations for the plug-in from any SOFA-formatted HRTFs, or to add HRTFs to an existing ambisonics decoder. The toolkit also implements several methods of HRTF interpolation and equalization. The mathematical conventions, ambisonics theory, and signal processing implemented in the toolkit are described.
Engineering Brief 403
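The signal path such a decoder configures can be sketched in a few lines. This is not the SABRE toolkit's code: it is a naive first-order sampling decoder under assumed ambiX conventions (ACN channel order W, Y, Z, X with SN3D normalization), with horizontal virtual loudspeakers whose feeds are convolved with measured HRIRs.

```python
import numpy as np

def ambix_to_binaural(bformat, hrirs_left, hrirs_right, azimuths_deg):
    """Render first-order ambiX audio to binaural via virtual loudspeakers.

    bformat: (4, n_samples) ambiX signals; hrirs_*: (n_speakers, hrir_len)
    HRIRs for virtual loudspeakers at `azimuths_deg` in the horizontal plane.
    """
    n_sp = len(azimuths_deg)
    az = np.radians(azimuths_deg)
    # SN3D first-order spherical harmonics, ACN order (W, Y, Z, X),
    # evaluated at each virtual-speaker direction (elevation 0).
    sh = np.stack([np.ones(n_sp), np.sin(az), np.zeros(n_sp), np.cos(az)], axis=1)
    feeds = (sh / n_sp) @ bformat          # naive sampling decode
    out_len = bformat.shape[1] + hrirs_left.shape[1] - 1
    left = np.zeros(out_len)
    right = np.zeros(out_len)
    for k in range(n_sp):                  # convolve each feed with its HRIR pair
        left += np.convolve(feeds[k], hrirs_left[k])
        right += np.convolve(feeds[k], hrirs_right[k])
    return left, right
```

Swapping in per-listener HRIRs loaded from a SOFA file, in place of generic ones, is exactly the customization step the toolkit automates for the plug-in.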