Synthetic Transaural Audio Rendering (STAR): A Perceptive 3D Audio Spatialization Method
Citation & Abstract
E. Meaux and S. Marchand, "Synthetic Transaural Audio Rendering (STAR): A Perceptive 3D Audio Spatialization Method," J. Audio Eng. Soc., vol. 69, no. 7/8, pp. 497-505, July 2021. doi: https://doi.org/10.17743/jaes.2021.0015
Abstract: The synthetic transaural audio rendering (STAR) method aims at canceling the cross-talk signals between two loudspeakers and the ears of the listener (in a transaural way), with acoustic paths not measured but computed by some model (thus synthetic). Our model is based on perceptive cues used by the human auditory system for sound localization. The aim is to give the listener the sense of the position of each source rather than to reconstruct the corresponding acoustic wave or field. Although the method currently focuses on the azimuth dimension, extensions to elevation and distance are now considered for full 3D sound, together with a discussion of the further work needed to improve overall quality and validate such extensions.
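The abstract describes transaural rendering as canceling the cross-talk between two loudspeakers and the listener's ears, with the acoustic paths computed by a model rather than measured. The sketch below illustrates the general cross-talk cancellation idea (not the authors' STAR implementation): the loudspeaker-to-ear paths are modeled per frequency bin by a 2x2 matrix, here using a toy contralateral gain (standing in for an interaural level difference) and a toy delay (standing in for an interaural time difference), both assumed values; inverting that matrix yields the cancellation filters.

```python
import numpy as np

# Illustrative cross-talk cancellation sketch (a toy model, NOT the
# STAR method itself). Each frequency bin gets a 2x2 path matrix C:
# ipsilateral paths are unity; contralateral paths combine an
# attenuation (ILD-like) and a phase shift from a delay (ITD-like).

fs = 44100           # sampling rate in Hz (assumed)
n_bins = 512         # number of frequency bins (assumed)
freqs = np.fft.rfftfreq(2 * (n_bins - 1), 1 / fs)

ild_gain = 0.5       # toy contralateral attenuation (linear gain, assumed)
itd = 0.3e-3         # toy interaural time difference in seconds (assumed)

def crosstalk_canceller(freqs, ild_gain, itd):
    """Return per-bin 2x2 inverse filters H[f] = C[f]^-1, where C[f]
    models the loudspeaker-to-ear acoustic paths at frequency f."""
    H = np.empty((len(freqs), 2, 2), dtype=complex)
    for i, f in enumerate(freqs):
        cross = ild_gain * np.exp(-2j * np.pi * f * itd)  # contralateral path
        C = np.array([[1.0, cross],
                      [cross, 1.0]])   # symmetric listener/loudspeaker setup
        H[i] = np.linalg.inv(C)        # inverse filter cancels the cross-talk
    return H

H = crosstalk_canceller(freqs, ild_gain, itd)

# Sanity check at DC (phase term is 1 there): applying the canceller to
# the modeled paths gives the identity, so each ear would receive only
# its intended binaural signal.
C0 = np.array([[1.0, ild_gain], [ild_gain, 1.0]])
print(np.allclose(H[0] @ C0, np.eye(2)))  # → True
```

The key design point mirrored from the abstract: because C is computed from a model of perceptual cues rather than measured impulse responses, no per-listener acoustic measurement is needed, at the cost of the model's accuracy.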
@article{meaux2021synthetic,
  author   = {Meaux, Eric and Marchand, Sylvain},
  journal  = {Journal of the Audio Engineering Society},
  title    = {Synthetic Transaural Audio Rendering ({STAR}): A Perceptive 3{D} Audio Spatialization Method},
  year     = {2021},
  volume   = {69},
  number   = {7/8},
  pages    = {497--505},
  doi      = {10.17743/jaes.2021.0015},
  month    = jul,
  abstract = {The synthetic transaural audio rendering (STAR) method aims at canceling the cross-talk signals between two loudspeakers and the ears of the listener (in a transaural way), with acoustic paths not measured but computed by some model (thus synthetic). Our model is based on perceptive cues used by the human auditory system for sound localization. The aim is to give the listener the sense of the position of each source rather than to reconstruct the corresponding acoustic wave or field. Although the method currently focuses on the azimuth dimension, extensions to elevation and distance are now considered for full 3D sound, together with a discussion of the further work needed to improve overall quality and validate such extensions.}
}
TY - JOUR
TI - Synthetic Transaural Audio Rendering (STAR): A Perceptive 3D Audio Spatialization Method
AU - Meaux, Eric
AU - Marchand, Sylvain
JO - Journal of the Audio Engineering Society
VL - 69
IS - 7/8
SP - 497
EP - 505
PY - 2021
Y1 - 2021/07
DO - 10.17743/jaes.2021.0015
AB - The synthetic transaural audio rendering (STAR) method aims at canceling the cross-talk signals between two loudspeakers and the ears of the listener (in a transaural way), with acoustic paths not measured but computed by some model (thus synthetic). Our model is based on perceptive cues used by the human auditory system for sound localization. The aim is to give the listener the sense of the position of each source rather than to reconstruct the corresponding acoustic wave or field. Although the method currently focuses on the azimuth dimension, extensions to elevation and distance are now considered for full 3D sound, together with a discussion of the further work needed to improve overall quality and validate such extensions.
ER -