Localization of Virtual Sounds in Dynamic Listening Using Sparse HRTFs
Citation & Abstract
Z. Ben-Hur, D. Alon, P. W. Robinson, and R. Mehra, "Localization of Virtual Sounds in Dynamic Listening Using Sparse HRTFs," Paper 1-1, (2020 August).
Abstract: Reproduction of virtual sound sources that are perceptually indistinguishable from real-world sounds is impossible without accurate representation of the virtual sound source location. A key component in such a reproduction system is the Head-Related Transfer Function (HRTF), which is different for every individual. In this study, we introduce an experimental setup for accurate evaluation of the localization performance using a spatial sound reproduction system in dynamic listening conditions. The setup offers the possibility of comparing the evaluation results with real-world localization performance, and facilitates testing of different virtual reproduction conditions, such as different HRTFs or different representations and interpolation methods of the HRTFs. Localization experiments are conducted, comparing real-world sound sources with virtual sound sources using high-resolution individual HRTFs, sparse individual HRTFs and a generic HRTF.
@article{ben-hur2020localization,
  author={Ben-Hur, Zamir and Alon, David and Robinson, Philip W. and Mehra, Ravish},
  journal={Journal of the Audio Engineering Society},
  title={Localization of Virtual Sounds in Dynamic Listening Using Sparse HRTFs},
  year={2020},
  volume={},
  number={},
  pages={},
  doi={},
  month={August},
  abstract={Reproduction of virtual sound sources that are perceptually indistinguishable from real-world sounds is impossible without accurate representation of the virtual sound source location. A key component in such a reproduction system is the Head-Related Transfer Function (HRTF), which is different for every individual. In this study, we introduce an experimental setup for accurate evaluation of the localization performance using a spatial sound reproduction system in dynamic listening conditions. The setup offers the possibility of comparing the evaluation results with real-world localization performance, and facilitates testing of different virtual reproduction conditions, such as different HRTFs or different representations and interpolation methods of the HRTFs. Localization experiments are conducted, comparing real-world sound sources with virtual sound sources using high-resolution individual HRTFs, sparse individual HRTFs and a generic HRTF.},
}
TY - paper
TI - Localization of Virtual Sounds in Dynamic Listening Using Sparse HRTFs
SP -
EP -
AU - Ben-Hur, Zamir
AU - Alon, David
AU - Robinson, Philip W.
AU - Mehra, Ravish
PY - 2020
JO - Journal of the Audio Engineering Society
IS -
VO -
VL -
Y1 - August 2020
AB - Reproduction of virtual sound sources that are perceptually indistinguishable from real-world sounds is impossible without accurate representation of the virtual sound source location. A key component in such a reproduction system is the Head-Related Transfer Function (HRTF), which is different for every individual. In this study, we introduce an experimental setup for accurate evaluation of the localization performance using a spatial sound reproduction system in dynamic listening conditions. The setup offers the possibility of comparing the evaluation results with real-world localization performance, and facilitates testing of different virtual reproduction conditions, such as different HRTFs or different representations and interpolation methods of the HRTFs. Localization experiments are conducted, comparing real-world sound sources with virtual sound sources using high-resolution individual HRTFs, sparse individual HRTFs and a generic HRTF.
Open Access
Authors:
Ben-Hur, Zamir; Alon, David; Robinson, Philip W.; Mehra, Ravish
Affiliation:
Facebook Reality Labs, Facebook
AES Conference:
2020 AES International Conference on Audio for Virtual and Augmented Reality (August 2020)
Paper Number:
1-1
Publication Date:
August 13, 2020
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=20864