Perceptual Weighting of Binaural Information: Toward an Auditory Perceptual "Spatial Codec" for Auditory Augmented Reality
Citation & Abstract
G. C. Stecker and A. Diedesch, "Perceptual Weighting of Binaural Information: Toward an Auditory Perceptual "Spatial Codec" for Auditory Augmented Reality," Paper 6-2, (2016 September).
Abstract: Auditory augmented reality (AR) requires accurate estimation of spatial information conveyed in the natural scene, coupled with accurate spatial synthesis of virtual sounds to be integrated within it. Solutions to both problems should consider the capabilities and limitations of the human binaural system, in order to maximize relevant over distracting acoustic information and enhance perceptual integration across AR layers. Recent studies have measured how human listeners integrate spatial information across multiple conflicting cues, revealing patterns of “perceptual weighting” that sample the auditory scene in a robust but spectrotemporally sparse manner. Such patterns can be exploited for binaural analysis and synthesis, much as time-frequency masking patterns are exploited by perceptual audio codecs, to improve efficiency and enhance perceptual integration.
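The abstract's central idea, combining spatial cues according to measured perceptual weights rather than uniformly, can be illustrated with a minimal sketch. The function and the specific weight values below are hypothetical and not taken from the paper; they simply show how a sparse, onset-emphasized weighting profile would let a small subset of time frames dominate a lateral-position estimate.

```python
import numpy as np

def weighted_cue_estimate(itd_per_frame, weights):
    """Combine per-frame ITD estimates (seconds) into one lateral-position
    estimate using a perceptual weighting profile (illustrative only)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize weights to sum to 1
    return float(np.dot(w, np.asarray(itd_per_frame, dtype=float)))

# Hypothetical example: five frames of a sound where only the onset frame
# carries an interaural time difference. An onset-dominant weighting
# (a stand-in for the sparse patterns the paper describes) lets that
# single frame largely determine the estimate: 0.6 * 300 us = 180 us.
itd = [300e-6, 0.0, 0.0, 0.0, 0.0]
weights = [0.6, 0.1, 0.1, 0.1, 0.1]
est = weighted_cue_estimate(itd, weights)   # 180 microseconds
```

Under this framing, a "spatial codec" could spend its analysis and synthesis effort only on the heavily weighted frames, analogous to how a perceptual audio codec spends bits only where masking does not hide the signal.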
@article{stecker2016perceptual,
  author={Stecker, G. Christopher and Diedesch, Anna},
  journal={Journal of the Audio Engineering Society},
  title={Perceptual Weighting of Binaural Information: Toward an Auditory Perceptual "Spatial Codec" for Auditory Augmented Reality},
  year={2016},
  month={September},
}
TY - CPAPER
TI - Perceptual Weighting of Binaural Information: Toward an Auditory Perceptual "Spatial Codec" for Auditory Augmented Reality
AU - Stecker, G. Christopher
AU - Diedesch, Anna
PY - 2016
JO - Journal of the Audio Engineering Society
Y1 - September 2016
ER -
Authors:
Stecker, G. Christopher; Diedesch, Anna
Affiliation:
Vanderbilt University School of Medicine, Nashville, TN, USA
AES Conference:
2016 AES International Conference on Audio for Virtual and Augmented Reality (September 2016)
Paper Number:
6-2
Publication Date:
September 21, 2016
Subject:
Perceptual Consideration for VR/AR
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=18504