AES New York 2019
Paper Session P13

P13 - Spatial Audio, Part 2

Friday, October 18, 9:00 am — 11:00 am (1E10)

Chair: Doyuen Ko, Belmont University - Nashville, TN, USA

P13-1 Simplified Source Directivity Rendering in Acoustic Virtual Reality Using the Directivity Sample Combination
Georg Götz, Aalto University - Espoo, Finland; Ville Pulkki, Aalto University - Espoo, Finland
This contribution proposes a simplified rendering of source directivity patterns for the simulation and auralization of auditory scenes consisting of multiple listeners or sources. It is based on applying directivity filters of arbitrary directivity patterns at multiple, presumably important directions and approximating the filter outputs of intermediate directions by interpolation. This considerably reduces the number of required filtering operations and thus increases the computational efficiency of the auralization. As a proof of concept, the simplification is evaluated from a technical as well as a perceptual point of view for one specific use case. The promising results suggest further studies of the proposed simplification to assess its applicability to more complex scenarios.
Convention Paper 10286 (Purchase now)
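The interpolation idea summarized in the abstract above can be illustrated with a minimal sketch: filter the source signal only at a few sampled directions, then blend those filter outputs for intermediate directions. The function names, the two-nearest-neighbor weighting, and the use of plain convolution are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def directivity_outputs(signal, filters):
    """Convolve the source signal with each sampled-direction directivity filter."""
    return [np.convolve(signal, h) for h in filters]

def interpolate_direction(outputs, angles, target_angle):
    """Approximate an intermediate direction by blending the two nearest
    sampled-direction outputs, weighted by angular distance (a simplifying
    assumption; the paper's combination scheme may differ)."""
    angles = np.asarray(angles, dtype=float)
    idx = np.argsort(np.abs(angles - target_angle))[:2]
    a, b = idx
    d_a = abs(angles[a] - target_angle)
    d_b = abs(angles[b] - target_angle)
    w_a = d_b / (d_a + d_b) if (d_a + d_b) > 0 else 1.0
    return w_a * outputs[a] + (1.0 - w_a) * outputs[b]
```

Only the sampled directions require a full filtering operation; every intermediate direction costs a weighted sum, which is where the claimed efficiency gain comes from.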

P13-2 Classification of HRTFs Using Perceptually Meaningful Frequency Arrays
Nolan Eley, New York University - New York, NY, USA
Head-related transfer functions (HRTFs) are essential in binaural audio. Because HRTFs are highly individualized and difficult to acquire, much research has been devoted to improving HRTF performance for the general population. Such research requires a valid and robust method for classifying and comparing HRTFs. This study used a k-nearest neighbor (KNN) classifier to evaluate the ability of several different frequency arrays to characterize HRTFs. The perceptual impact of these frequency arrays was evaluated through a subjective test. Mel-frequency arrays showed the best results in the KNN classification tests, while the subjective test results were inconclusive.
Convention Paper 10288 (Purchase now)
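The approach described above pairs a mel-spaced frequency array with a KNN classifier. A minimal sketch under stated assumptions: the standard Hz-to-mel formula for spacing the array, and a Euclidean-distance majority vote for classification; feature extraction details and parameters are hypothetical.

```python
import numpy as np

def mel_frequencies(n_bands, f_min=20.0, f_max=20000.0):
    """Mel-spaced frequency array (Hz) using the common 2595*log10(1+f/700) map."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    return inv(np.linspace(mel(f_min), mel(f_max), n_bands))

def knn_classify(features, labels, query, k=3):
    """Majority vote among the k training feature vectors nearest the query."""
    d = np.linalg.norm(np.asarray(features, dtype=float) - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

Sampling HRTF magnitudes at mel-spaced points weights the comparison toward the perceptually denser low-frequency region, which is presumably why such arrays characterized the HRTFs well in the reported tests.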

P13-3 An HRTF Based Approach towards Binaural Sound Source Localization
Kaushik Sunder, Embody VR - Mountain View, CA, USA; Yuxiang Wang, University of Rochester - Rochester, NY, USA
With the evolution of smart headphones, hearables, and hearing aids, there is a need for technologies that improve situational awareness. The device needs to constantly monitor real-world events and cue the listener to stay aware of the outside world. In this paper we develop a technique to identify the exact location of the dominant sound source using the unique spectral and temporal features of the listener’s head-related transfer functions (HRTFs). Unlike most state-of-the-art beamforming technologies, this method localizes the sound source using just two microphones, thereby reducing the cost and complexity of the technology. An experimental framework is set up in the EmbodyVR anechoic chamber, and hearing aid recordings are carried out for several different trajectories, SNRs, and turn rates. Results indicate that the source localization algorithms perform well for dynamic moving sources at different SNR levels.
Convention Paper 10289 (Purchase now)
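One simple way to exploit HRTF spectral cues for two-microphone localization, sketched here purely for illustration, is template matching: compare the measured left/right magnitude spectra against a stored HRTF pair per candidate direction and pick the best match. The data layout and distance measure are assumptions, not the authors' algorithm.

```python
import numpy as np

def localize(left_mag, right_mag, hrtf_db):
    """Return the candidate angle whose stored HRTF magnitude templates
    (left, right) have the smallest squared spectral distance to the
    measured left/right spectra. hrtf_db: {angle: (left_tmpl, right_tmpl)}."""
    best_angle, best_err = None, np.inf
    for angle, (tl, tr) in hrtf_db.items():
        err = np.sum((left_mag - tl) ** 2) + np.sum((right_mag - tr) ** 2)
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle
```

Because the directional information lives in the two HRTFs themselves, no microphone array or beamformer is needed, which is the cost/complexity advantage the abstract highlights.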

P13-4 Physical Controllers vs. Hand-and-Gesture Tracking: Control Scheme Evaluation for VR Audio Mixing
Justin Bennington, Belmont University - Nashville, TN, USA; Doyuen Ko, Belmont University - Nashville, TN, USA
This paper investigates potential differences in performance between physical and hand-and-gesture control within a Virtual Reality (VR) audio mixing environment. The test was designed to draw upon prior evaluations of control schemes for audio mixing while presenting sound sources to the user under both control schemes in VR. A VR audio mixing interface was developed in order to facilitate a subjective evaluation of the two control schemes. Response data were analyzed with t-tests and ANOVA. Physical controllers were generally rated higher than the hand-and-gesture controls in terms of perceived accuracy, efficiency, and satisfaction. No significant difference in task completion time was found for either control scheme. The test participants largely preferred the physical controllers over the hand-and-gesture control scheme. There were no significant differences in the ability to make adjustments when comparing groups of more experienced and less experienced audio engineers.
Convention Paper 10290 (Purchase now)
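The kind of two-group comparison reported above (e.g., accuracy ratings for the two control schemes) is commonly done with Welch's two-sample t statistic, sketched below; the data and variable names are made up for illustration and are not the study's results.

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed):
    (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b), sample variances ddof=1."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)
```

A large positive statistic would indicate the first scheme's ratings exceed the second's by more than sampling noise explains; significance would then be read off the t distribution with Welch-Satterthwaite degrees of freedom.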

Return to Paper Sessions