J.-M. Jot and K. S. Lee, "Augmented Reality Headphone Environment Rendering," Paper 8-2 (2016 September).
Abstract: In headphone-based augmented reality audio applications, computer-generated audio-visual objects are rendered over headphones or earbuds and blended into a natural audio environment. This requires binaural artificial reverberation processing to match local environment acoustics, so that synthetic audio objects are not distinguishable from sounds occurring naturally or reproduced over loudspeakers. Solutions involving the measurement or calculation of binaural room impulse responses in a consumer environment are limited by practical obstacles and complexity. We propose an approach exploiting a statistical reverberation model, enabling practical acoustical environment characterization and computationally efficient reflection and reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality,” enabling natural-sounding, externalized virtual 3-D audio reproduction of music, movie, or game soundtracks.
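For reference, the "statistical reverberation model" mentioned in the abstract is, in the standard formulation used throughout the artificial-reverberation literature (Polack's model, which also underlies Jot's earlier reverberator designs), exponentially decaying Gaussian noise per frequency band. The sketch below restates that textbook model as context only; it is an assumption about the general setting, not necessarily the exact parameterization used in this paper.

h(t) = b(t) \, e^{-\delta(f)\, t}, \qquad \delta(f) = \frac{3 \ln 10}{T_{60}(f)}

Here b(t) is zero-mean Gaussian noise and T_{60}(f) is the frequency-dependent reverberation time, so the energy envelope decays by 60 dB after T_{60}(f) seconds. Under such a model, the local environment can be characterized by a small set of parameters (reverberation time and level per band) rather than by a full measured binaural room impulse response.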
@article{jot2016augmented,
author={Jot, Jean-Marc and Lee, Keun Sup},
journal={Journal of the Audio Engineering Society},
title={Augmented Reality Headphone Environment Rendering},
year={2016},
month={September},
}
TY - paper
TI - Augmented Reality Headphone Environment Rendering
AU - Jot, Jean-Marc
AU - Lee, Keun Sup
JO - Journal of the Audio Engineering Society
PY - 2016
Y1 - September 2016
Authors:
Jot, Jean-Marc; Lee, Keun Sup
Affiliations:
DTS, Inc., Los Gatos, CA, USA; Apple Inc., Cupertino, CA, USA (See document for exact affiliation information.)
AES Conference:
2016 AES International Conference on Audio for Virtual and Augmented Reality (September 2016)
Paper Number:
8-2
Publication Date:
September 21, 2016
Subject:
Capture, Rendering, and Mixing for VR
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=18506