A Model of Distraction in an Audio-on-Audio Interference Situation with Music Program Material
Citation & Abstract
J. Francombe, R. Mason, M. Dewhirst, and S. Bech, "A Model of Distraction in an Audio-on-Audio Interference Situation with Music Program Material," J. Audio Eng. Soc., vol. 63, no. 1/2, pp. 63-77 (2015 January). doi: https://doi.org/10.17743/jaes.2015.0006
Abstract: There are many situations in which multiple audio programs are replayed over loudspeakers in the same acoustic environment, allowing listeners to focus on their desired target program. Where this situation is deliberately created and the different program items are centrally controlled, each listener can be viewed as having a personal sound zone system. In order to evaluate and optimize such situations in a perceptually relevant manner, the authors created a predictive model using the features that contribute to the distraction from unwanted sounds. Feature extraction was motivated by a qualitative analysis of subject responses. Distraction ratings were collected for one hundred randomly created audio-on-audio interference situations with music target and interferer programs. The selected features were related to the overall loudness, loudness ratio, perceptual evaluation of audio source separation, and frequency content of the interferer. The model was found to predict accurately for the training and validation datasets.
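The abstract outlines a feature-based regression approach: listener distraction ratings are collected for audio-on-audio interference stimuli and then predicted from features such as overall loudness, target-to-interferer loudness ratio, a source-separation quality measure, and the frequency content of the interferer. The sketch below is an illustration only, not the authors' implementation: the placeholder feature values, the 0-100 rating scale, and the ordinary least-squares model form are all assumptions.

# Minimal sketch (not the paper's method): predict a distraction rating
# from per-stimulus features of the kind listed in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder feature matrix: 100 interference stimuli x 4 features
# (interferer loudness, loudness ratio, separation quality, spectral descriptor).
# Real values would come from loudness models, PEASS, and spectral analysis.
X = rng.normal(size=(100, 4))

# Placeholder listener distraction ratings on an assumed 0-100 scale.
y = rng.uniform(0, 100, size=100)

# Fit an ordinary least-squares linear model (an assumed model form).
X_design = np.column_stack([np.ones(len(X)), X])   # add intercept column
coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# Predict distraction and report the fit on the training data.
y_hat = X_design @ coeffs
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print("model coefficients:", coeffs)
print("training RMSE:", rmse)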
@article{francombe2015a,
author={Francombe, Jon and Mason, Russell and Dewhirst, Martin and Bech, Søren},
journal={Journal of the Audio Engineering Society},
title={A Model of Distraction in an Audio-on-Audio Interference Situation with Music Program Material},
year={2015},
volume={63},
number={1/2},
pages={63-77},
doi={10.17743/jaes.2015.0006},
month={January},}
TY - JOUR
TI - A Model of Distraction in an Audio-on-Audio Interference Situation with Music Program Material
SP - 63
EP - 77
AU - Francombe, Jon
AU - Mason, Russell
AU - Dewhirst, Martin
AU - Bech, Søren
PY - 2015
JO - Journal of the Audio Engineering Society
IS - 1/2
VL - 63
Y1 - 2015/01//
ER - 
Authors:
Francombe, Jon; Mason, Russell; Dewhirst, Martin; Bech, Søren
Affiliations:
Institute of Sound Recording, University of Surrey, Guildford, UK; Bang & Olufsen a/s, Struer, Denmark; Section of Signal and Information Processing, Department of Electronic Systems, Aalborg University, Aalborg, Denmark (see document for exact affiliation information)
JAES Volume 63, Issue 1/2, pp. 63-77; January 2015
Publication Date:
February 10, 2015
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=17567