Speech-To-Screen: Spatial Separation of Dialogue from Noise towards Improved Speech Intelligibility for the Small Screen
Citation & Abstract
P. Demonte, Y. Tang, R. J. Hughes, T. Cox, B. Fazenda, and B. Shirley, "Speech-To-Screen: Spatial Separation of Dialogue from Noise towards Improved Speech Intelligibility for the Small Screen," Paper 10011, (2018 May).
Abstract: Can externalizing dialogue in the presence of stereo background noise improve speech intelligibility? This has been investigated for audio over headphones using head-tracking in order to explore potential future developments for small-screen devices. A quantitative listening experiment tasked participants with identifying target words in spoken sentences played in the presence of background noise via headphones. Sixteen different combinations of three independent variables were tested: speech and noise locations (internalized/externalized), video (on/off), and masking noise (stationary/fluctuating). The results revealed that the largest improvements in speech intelligibility were produced both by the video-on condition and by externalizing the speech at the screen while retaining the masking noise in the stereo mix.
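The abstract's count of 16 conditions follows from fully crossing the binary factors: whether the speech/noise location is treated as a single four-level variable or as two binary ones, the crossing yields the same 16 cells (2 x 2 x 2 x 2). A minimal Python sketch of that enumeration, using level labels taken from the abstract (the condition ordering is an assumption for illustration, not the paper's own numbering):

from itertools import product

# Factor levels as described in the abstract; the ordering of conditions
# below is illustrative, not the paper's own condition numbering.
speech_location = ["internalized", "externalized"]  # dialogue in-head vs. externalized at the screen
noise_location = ["internalized", "externalized"]   # masker in the stereo mix vs. externalized
video = ["on", "off"]
masker = ["stationary", "fluctuating"]

conditions = list(product(speech_location, noise_location, video, masker))
assert len(conditions) == 16  # 2 * 2 * 2 * 2 combinations

for i, (s, n, v, m) in enumerate(conditions, start=1):
    print(f"Condition {i:2d}: speech={s}, noise={n}, video={v}, masker={m}")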
@article{demonte2018speech-to-screen,
author={Demonte, Philippa and Tang, Yan and Hughes, Richard J. and Cox, Trevor and Fazenda, Bruno and Shirley, Ben},
journal={Journal of the Audio Engineering Society},
title={Speech-To-Screen: Spatial Separation of Dialogue from Noise towards Improved Speech Intelligibility for the Small Screen},
year={2018},
month={May},
abstract={Can externalizing dialogue in the presence of stereo background noise improve speech intelligibility? This has been investigated for audio over headphones using head-tracking in order to explore potential future developments for small-screen devices. A quantitative listening experiment tasked participants with identifying target words in spoken sentences played in the presence of background noise via headphones. Sixteen different combinations of three independent variables were tested: speech and noise locations (internalized/externalized), video (on/off), and masking noise (stationary/fluctuating). The results revealed that the largest improvements in speech intelligibility were produced both by the video-on condition and by externalizing the speech at the screen while retaining the masking noise in the stereo mix.},}
TY - CPAPER
TI - Speech-To-Screen: Spatial Separation of Dialogue from Noise towards Improved Speech Intelligibility for the Small Screen
AU - Demonte, Philippa
AU - Tang, Yan
AU - Hughes, Richard J.
AU - Cox, Trevor
AU - Fazenda, Bruno
AU - Shirley, Ben
JO - Journal of the Audio Engineering Society
PY - 2018
Y1 - 2018/05/14
AB - Can externalizing dialogue in the presence of stereo background noise improve speech intelligibility? This has been investigated for audio over headphones using head-tracking in order to explore potential future developments for small-screen devices. A quantitative listening experiment tasked participants with identifying target words in spoken sentences played in the presence of background noise via headphones. Sixteen different combinations of three independent variables were tested: speech and noise locations (internalized/externalized), video (on/off), and masking noise (stationary/fluctuating). The results revealed that the largest improvements in speech intelligibility were produced both by the video-on condition and by externalizing the speech at the screen while retaining the masking noise in the stereo mix.
ER -
Open Access
Authors:
Demonte, Philippa; Tang, Yan; Hughes, Richard J.; Cox, Trevor; Fazenda, Bruno; Shirley, Ben
Affiliation:
University of Salford, Salford, Greater Manchester, UK
AES Convention:
144 (May 2018)
Paper Number:
10011
Publication Date:
May 14, 2018
Subject:
Perception – Part 3
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=19407