AES New York 2018 (145th Convention)
Engineering Brief EB05 - Spatial Audio


Saturday, October 20, 9:00 am — 10:00 am (1E11)

Chair:
Gavin Kearney, University of York - York, UK

EB05-1 Creative Approach to Audio in Corporate Brand Experiences
Alexander Mayo, Arup - New York, NY, USA; Nathan Blum, Arup - New York, NY, USA; Leonard Roussel, Arup - New York, NY, USA
Corporate clients have become focused on turning their digital platforms into human experiences. As designers, we must create unique solutions that speak to the corporation's brand identity. For the Lobby at Salesforce New York, the Salesforce “Trailblazer” brand is brought to life through a sonic landscape. A spatial audio system and custom composition are paired with immersive lighting and LED displays to create a multimedia entry into Salesforce. This 3D sound system installation:
• Reinforces the Salesforce brand and is inspired by their events;
• Employs multiple loudspeakers, network audio transport, and integrated control;
• Makes use of audio samples arranged in a composition that utilizes an object-based spatialization engine (sketched after this entry);
• Integrates seamlessly with architectural features and building systems.
Engineering Brief 475
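As a rough illustration of the object-based spatialization idea mentioned in the abstract, and not the Arup installation's actual engine, the sketch below derives per-loudspeaker gains from an audio object's position using a simple distance-based gain law. The loudspeaker layout and rolloff are assumptions.

```python
import numpy as np

# Hypothetical four-loudspeaker layout (x, y positions in meters).
speakers = np.array([[0.0, 4.0], [3.0, 0.0], [-3.0, 0.0], [0.0, -4.0]])

def object_gains(obj_xy, rolloff=1.0):
    """Per-loudspeaker gains from an object's position, normalized to unit power."""
    d = np.linalg.norm(speakers - np.asarray(obj_xy), axis=1)
    g = 1.0 / np.maximum(d, 0.1) ** rolloff   # louder from nearby loudspeakers
    return g / np.linalg.norm(g)

print(object_gains([1.0, 2.0]).round(3))      # an object near the front-right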

EB05-2 Subspace-Based HRTF Synthesis from Sparse Data: A Joint PCA and ML-Based Approach
Sunil G. Bharitkar, HP Labs., Inc. - San Francisco, CA, USA; Timothy Mauer, HP, Inc. - Vancouver, WA, USA; Teresa Wells, HP, Inc. - San Francisco, CA, USA; David Berfanger, HP, Inc. - Vancouver, WA, USA
Head-related transfer functions (HRTFs) are used to create the perception of a virtual sound source at an arbitrary azimuth and elevation. Publicly available databases sample only a subset of these directions, due to physical constraints (viz., the loudspeakers generating the stimuli not being point sources) and the time required to acquire and deconvolve responses for a large number of spatial directions. In this paper we present a subspace-based technique for reconstructing HRTFs at arbitrary directions for the IRCAM-Listen HRTF database, which comprises a set of HRTFs sampled every 15 degrees along the azimuth direction. The presented technique first augments the sparse IRCAM dataset using the concept of auditory localization blur, then derives a set of P = 6 principal components by applying PCA to the original and augmented HRTFs, and finally trains an artificial neural network (ANN) on these directional principal components. The HRTF for an arbitrary direction is reconstructed by post-multiplying the ANN output, comprising the six estimated principal components, with a frequency weighting matrix. The advantage of the subspace approach, which involves only six principal components, is a low-complexity ANN-based HRTF synthesis model, compared to training an ANN to output an HRTF over all frequencies. Objective results demonstrate reasonable interpolation with the presented approach.
Engineering Brief 476
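As a hedged illustration of the subspace idea described above, and not the authors' implementation (which additionally uses localization-blur augmentation and a specific frequency weighting), the sketch below compresses a stand-in HRTF magnitude set to P = 6 principal components and trains a small network to map direction to component weights. All data shapes and network settings are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Stand-in data: one magnitude spectrum per measured direction. Real use
# would load the IRCAM-Listen HRTFs; the shapes here are assumptions.
rng = np.random.default_rng(0)
n_dirs, n_freqs = 72, 256
directions = rng.uniform([-180.0, -45.0], [180.0, 90.0], (n_dirs, 2))  # azimuth, elevation (deg)
hrtf_mag_db = rng.standard_normal((n_dirs, n_freqs))

# 1) Subspace: keep P = 6 principal components of the HRTF magnitudes.
pca = PCA(n_components=6)
weights = pca.fit_transform(hrtf_mag_db)        # (n_dirs, 6) directional weights

# 2) Train an ANN to predict the six component weights from direction.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(directions, weights)

# 3) Synthesis at an unmeasured direction: predicted weights post-multiplied
#    by the component matrix (the frequency-weighting step of the abstract).
query = np.array([[37.5, 10.0]])
hrtf_est = net.predict(query) @ pca.components_ + pca.mean_
print(hrtf_est.shape)                           # (1, n_freqs)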

EB05-3 Audio Application Programming Interface for Mixed Reality
Rémi Audfray, Magic Leap, Inc. - San Francisco, CA, USA; Jean-Marc Jot, Magic Leap - San Francisco, CA, USA; Sam Dicker, Magic Leap, Inc. - San Francisco, CA, USA
In mixed reality (MR) applications, digital audio objects are rendered via an acoustically transparent playback system to blend with the physical surroundings of the listener. This requires a binaural simulation process that perceptually matches the reverberation properties of the local environment, so that virtual sounds are not distinguishable from real sounds emitted around the listener. In this paper we propose an acoustic scene programming model that allows pre-authoring the behaviors and trajectories of a set of sound sources in an MR audio experience, while deferring to rendering time the specification of the reverberation properties of the enclosing room.
Engineering Brief 477
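To make the proposed programming model concrete, here is a minimal sketch of the split the abstract describes: source behaviors and trajectories are authored ahead of time, while the room's reverberation properties are bound only at render time. All class and method names are hypothetical illustrations, not the Magic Leap audio API.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SoundSource:
    clip: str                               # audio asset identifier (hypothetical)
    trajectory: Callable[[float], Vec3]     # time (s) -> position (m)

@dataclass
class RoomProperties:
    rt60_s: float                           # reverberation time, known only at render time
    volume_m3: float

@dataclass
class AcousticScene:
    sources: List[SoundSource] = field(default_factory=list)

    def render(self, t: float, room: RoomProperties) -> None:
        # A real renderer would binauralize here; the room model arrives
        # only now, so one authored scene adapts to any physical space.
        for src in self.sources:
            x, y, z = src.trajectory(t)
            print(f"{src.clip} at ({x:.1f}, {y:.1f}, {z:.1f}) m, RT60 = {room.rt60_s} s")

# Authoring time: behaviors and trajectories are fixed, the room is not.
scene = AcousticScene([SoundSource("bee.wav", lambda t: (0.5 * t, 1.5, 0.0))])
# Render time: the listener's measured room is finally supplied.
scene.render(2.0, RoomProperties(rt60_s=0.45, volume_m3=60.0))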

EB05-4 Accessible Object-Based Audio Using Hierarchical Narrative Importance Metadata
Lauren Ward, University of Salford - Salford, UK; Ben Shirley, University of Salford - Salford, Greater Manchester, UK, and Salsa Sound Ltd - Salford, Greater Manchester, UK; Jon Francombe, BBC Research and Development - Salford, UK
Object-based audio has great capacity for the production and delivery of adaptive and personalizable content. This can be used to improve the accessibility of complex content for listeners with hearing impairments. An adaptive object-based audio system was used to make mix changes that let listeners balance narrative comprehension against immersion using a single dial. Performance was evaluated by focus groups of 12 hearing-impaired participants, who gave primarily positive feedback. An experienced sound designer also evaluated the function of the control and the process for authoring the necessary metadata, establishing that the control facilitated a clearer narrative while maintaining mix quality. In future work the algorithm, production tools, and interface will be refined based on the feedback received.
Engineering Brief 478
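As a minimal sketch of how hierarchical narrative importance metadata might drive the single-dial control described above: each object carries an importance level, and turning the dial toward comprehension attenuates the less important objects. The gain law, number of levels, and object names are illustrative assumptions, not the authors' published algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AudioObject:
    name: str
    importance: int    # hierarchical metadata: 3 = essential ... 0 = ambience

def object_gain(obj: AudioObject, dial: float, levels: int = 4) -> float:
    """dial = 0.0 keeps the full immersive mix; dial = 1.0 attenuates
    low-importance objects to favor narrative comprehension."""
    attenuation = dial * (1.0 - obj.importance / (levels - 1))
    return 1.0 - attenuation

mix: List[AudioObject] = [
    AudioObject("dialogue", 3),
    AudioObject("foley", 1),
    AudioObject("crowd ambience", 0),
]
for dial in (0.0, 0.5, 1.0):
    print(dial, {o.name: round(object_gain(o, dial), 2) for o in mix})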

