AES New York 2019
Engineering Brief EB3
EB3 - Posters: Spatial Audio
Friday, October 18, 11:00 am — 12:30 pm (South Concourse A)
EB3-1 Comparing Externalization Between the Neumann KU100 Versus Low Cost DIY Binaural Dummy Head—Kelley DiPasquale, SUNY Potsdam, Crane School of Music - Potsdam, NY, USA
Music is usually recorded using traditional microphone techniques. With technology continually advancing, binaural recording, a technique in which two microphones are used to create a three-dimensional stereo image, has become more popular. Commercially available binaural heads are prohibitively expensive and not practical for use in typical educational environments or for casual use in a home studio. This experiment consisted of gathering recorded stimuli with both a homemade binaural head and the Neumann KU 100. The recordings were played back for 34 subjects, who were instructed to rate the level of externalization of each example. The study investigates whether a homemade binaural head built for under $500 can externalize sound as well as a commercially available binaural head, the Neumann KU 100.
Engineering Brief 535 (Download now)
EB3-2 SALTE Pt. 1: A Virtual Reality Tool for Streamlined and Standardized Spatial Audio Listening Tests—Daniel Johnston, University of York - York, UK; Benjamin Tsui, University of York - York, UK; Gavin Kearney, University of York - York, UK
This paper presents SALTE (Spatial Audio Listening Test Environment), an open-source framework for conducting spatial audio perceptual tests within virtual reality (VR). The framework incorporates standard test paradigms such as MUSHRA, 3GPP TS 26.259, and audio localization. Its simplified drag-and-drop user interface facilitates rapid and robust construction of customized VR experimental environments within Unity3D without any prior knowledge of the game engine or the C# language. All audio is rendered by the dedicated SALTE audio renderer, which is controlled by dynamic participant data sent via Open Sound Control (OSC). Finally, the software can export all experimental conditions, such as visuals, participant interaction mechanisms, and test parameters, allowing for streamlined, standardized, and comparable data within and between organizations.
Engineering Brief 536 (Download now)
EB3-3 SALTE Pt. 2: On the Design of the SALTE Audio Rendering Engine for Spatial Audio Listening Tests in VR—Tomasz Rudzki, University of York - York, UK; Chris Earnshaw, University of York - York, UK; Damian Murphy, University of York - York, UK; Gavin Kearney, University of York - York, UK
The dedicated audio rendering engine for conducting listening experiments using the SALTE (Spatial Audio Listening Test Environment) open-source virtual reality framework is presented. The renderer can be used for controlled playback of Ambisonic scenes (up to 7th order) over headphones and loudspeakers. Binaural-based Ambisonic rendering facilitates the use of custom HRIRs contained within separate WAV files or SOFA files as well as head tracking. All parameters of the audio rendering software can be controlled in real-time by the SALTE graphical user interface. This allows for perceptual evaluation of Ambisonic scenes and different decoding schemes using custom HRTFs.
Engineering Brief 537 (Download now)
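As a concrete illustration of the OSC control path the two SALTE briefs describe, the sketch below hand-assembles a minimal OSC message of the kind a test interface might send to a renderer. This is only a demonstration of the OSC wire format; the address path `/salte/renderer/yaw` and its parameter are hypothetical, as the actual SALTE OSC namespace is not specified in these abstracts.

```python
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, value: float) -> bytes:
    # Address pattern, then the type tag string ",f" (one float argument),
    # then the argument itself as a big-endian 32-bit float.
    return (osc_pad(address.encode())
            + osc_pad(b",f")
            + struct.pack(">f", value))

# Hypothetical head-tracking update: yaw angle in degrees.
packet = osc_message("/salte/renderer/yaw", 90.0)
```

In practice such a packet would be sent to the renderer over UDP (e.g., with `socket.sendto`); OSC libraries wrap exactly this framing.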
EB3-4 Mixed Reality Collaborative Music—Andrea Genovese, New York University - New York, NY, USA; Marta Gospodarek, New York University - New York, NY, USA; Agnieszka Roginska, New York University - New York, NY, USA
This work illustrates a virtual collaborative experience between a real-time musician and virtual game characters based on pre-recorded performers. A dancer and percussionists were recorded with microphones and a motion capture system so that their data could be converted into game avatars reproducible within VR/AR scenes. The live musician was also converted into a virtual character and rendered in VR, and the whole scene was observable by an audience wearing HMDs. The acoustic character of the live and pre-recorded audio was matched in order to blend the music into a cohesive mixed reality scene and meet the viewer's expectations set by the real-world elements. [Presentation only; not available in E-Library]
EB3-6 Field Report: Immersive Recording of a Wind Ensemble Using Height Channels and Delay Compensation for a Realistic Playback Experience—Hyunjoung Yang, McGill University - Montreal, QC, Canada; Alexander Dobson, McGill University - Montreal, QC, Canada; Richard King, McGill University - Montreal, QC, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, QC, Canada
Practical examples of orchestral recording in stereo or surround are relatively easy to obtain, whereas documented recording practice in immersive audio remains relatively limited. This paper shares the experience of an immersive recording process for a wind orchestra at McGill University. It covers the concerns that had to be considered before planning the concert recording, the problems encountered during planning, and the solutions to those problems. In conclusion, the final result and the approach taken are discussed.
Engineering Brief 538 (Download now)