AES New York 2017
Paper Session P07
P07 - Perception—Part 2
Thursday, October 19, 9:00 am — 11:00 am (Rm 1E12)
Chair: Dan Mapes-Riordan, Etymotic Research - Elk Grove Village, IL, USA
P07-1 A Statistical Model that Predicts Listeners’ Preference Ratings of In-Ear Headphones: Part 1—Listening Test Results and Acoustic Measurements—Sean Olive, Harman International - Northridge, CA, USA; Todd Welti, Harman International - Northridge, CA, USA; Omid Khonsaripour, Harman International - Northridge, CA, USA
A series of controlled listening tests was conducted on 30 different models of in-ear (IE) headphones to measure their relative sound quality. A total of 71 listeners, both trained and untrained, rated the headphones on a 100-point preference scale using a multiple-stimulus method with a hidden reference and low anchor. A virtual headphone test method was used, wherein each headphone was simulated over a high-quality replicator headphone equalized to match its measured magnitude response. Leakage was monitored and eliminated for each subject. The results revealed that both trained and untrained listeners preferred the hidden reference, which was the replicator headphone equalized to our new IE headphone target response curve. The further the other headphones deviated from the target response, the less they were preferred. Part 2 of this paper develops a statistical model that predicts the headphone preference ratings from their acoustic measurements.
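The virtual headphone method rests on a simple signal-processing idea: design an equalization filter that reshapes the replicator headphone's measured magnitude response into that of the headphone under test. A minimal sketch of such a filter design is shown below; the function name and the linear-phase frequency-sampling approach are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def virtual_headphone_eq(replicator_mag, target_mag, n_taps=512):
    """Design a linear-phase FIR that reshapes the replicator's measured
    magnitude response into the target headphone's magnitude response.

    Inputs are magnitude responses sampled on a uniform frequency grid
    from DC to Nyquist (length n_taps // 2 + 1). This is a hypothetical
    helper, not the implementation used in the paper.
    """
    replicator_mag = np.asarray(replicator_mag, dtype=float)
    target_mag = np.asarray(target_mag, dtype=float)
    # Desired correction: replicator * eq == target (floor avoids /0)
    eq_mag = target_mag / np.maximum(replicator_mag, 1e-6)
    # Zero-phase spectrum -> impulse response via inverse real FFT
    impulse = np.fft.irfft(eq_mag, n=n_taps)
    impulse = np.roll(impulse, n_taps // 2)  # shift to make it causal
    impulse *= np.hanning(n_taps)            # taper truncation artifacts
    return impulse
```

Convolving the test signal with this filter before playback over the replicator then approximates listening through the measured headphone.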
Convention Paper 9840 (Purchase now)
P07-2 Perceptual Assessment of Headphone Distortion—Louis Fielder, Dolby - San Francisco, CA, USA
A perceptually driven distortion metric for headphones is proposed, based on a critical-band spectral comparison of the distortion and noise to an appropriate masked threshold when the headphone is excited by a sine wave signal. Additionally, new headphone-based masking curves for 20, 50, 100, 200, 315, 400, and 500 Hz sine waves are derived from subjective tests in which narrow bands of noise are masked by a sine wave signal. The ratios of the measured distortion and noise levels in critical bands to the appropriate masking curve values are compared, with the critical bands starting at the second harmonic. The audibility of all these contributions is then combined into a single audibility value. Extension to loudspeaker measurements is briefly discussed.
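The core of the metric is a per-band comparison of distortion-plus-noise level against the masked threshold, followed by a combination into one audibility number. The sketch below illustrates that structure under stated assumptions: the function name is hypothetical, and the power-sum combination rule is an assumption rather than the paper's exact formula.

```python
import numpy as np

def distortion_audibility(band_levels_db, masked_threshold_db):
    """Combine per-critical-band distortion levels into a single
    audibility value (illustrative sketch, not the paper's formula).

    band_levels_db: measured distortion + noise level per critical band
        (dB SPL), starting at the band containing the second harmonic.
    masked_threshold_db: masked threshold for the same bands (dB SPL),
        taken from the sine-wave masking curves.
    """
    levels = np.asarray(band_levels_db, dtype=float)
    thresh = np.asarray(masked_threshold_db, dtype=float)
    # Ratio of distortion to masked threshold, per band, in dB
    ratio_db = levels - thresh
    # Only supra-threshold bands contribute to audibility
    audible = np.maximum(ratio_db, 0.0)
    # Assumed combination rule: power-sum the audible excesses
    power = np.sum(10.0 ** (audible / 10.0) - 1.0)
    return 10.0 * np.log10(1.0 + power)
```

With this shape, distortion that sits entirely below the masked threshold yields an audibility of zero, and a single band 10 dB above threshold yields 10 dB.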
Convention Paper 9841 (Purchase now)
P07-3 The Adjustment / Satisfaction Test (A/ST) for the Subjective Evaluation of Dialogue Enhancement—Matteo Torcoli, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Jürgen Herre, International Audio Laboratories Erlangen - Erlangen, Germany; Fraunhofer IIS - Erlangen, Germany; Jouni Paulus, Fraunhofer IIS - Erlangen, Germany; International Audio Laboratories Erlangen - Erlangen, Germany; Christian Uhle, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; International Audio Laboratories Erlangen - Erlangen, Germany; Harald Fuchs, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Oliver Hellmuth, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Media consumption is heading towards high degrees of content personalization, so it is crucial to assess the perceptual performance of personalized media delivery. This work proposes the Adjustment/Satisfaction Test (A/ST), a perceptual test in which subjects interact with a user-adjustable system while their adjustment preferences and resulting satisfaction levels are studied. We employ the A/ST to evaluate an object-based audio system that enables personalization of the balance between dialogue and background, i.e., a Dialogue Enhancement system. The case in which the original audio objects are readily available is compared with the case in which they are estimated by blind source separation. Personalization is used extensively and clearly increases satisfaction, even in the blind-source-separation case.
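The object-based system's core operation is rendering a personalized mix from separate dialogue and background objects, with the dialogue level under user control. A minimal sketch of that rendering step follows; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def personalized_mix(dialogue, background, dialogue_gain_db):
    """Render a user-personalized mix from separate dialogue and
    background audio objects (illustrative sketch of the object-based
    idea; in the blind-source-separation case, the two objects would
    first be estimated from the broadcast mix).
    """
    g = 10.0 ** (dialogue_gain_db / 20.0)  # dB -> linear amplitude gain
    return g * np.asarray(dialogue, dtype=float) + np.asarray(background, dtype=float)
```

In the A/ST, the subject adjusts `dialogue_gain_db` freely; the chosen setting and the reported satisfaction are the measured quantities.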
Convention Paper 9842 (Purchase now)
P07-4 Automatic Text Clustering for Audio Attribute Elicitation Experiment Responses—Jon Francombe, University of Surrey - Guildford, Surrey, UK; Tim Brookes, University of Surrey - Guildford, Surrey, UK; Russell Mason, University of Surrey - Guildford, Surrey, UK
Collection of text data is an integral part of descriptive analysis, a method commonly used in audio quality evaluation experiments. Where large text data sets will be presented to a panel of human assessors (e.g., to group responses that have the same meaning), it is desirable to reduce redundancy as much as possible in advance. Text clustering algorithms have been used to achieve such a reduction. A text clustering algorithm was tested on a dataset for which manual annotation by two experts was also collected. The comparison between the manual annotations and automatically-generated clusters enabled evaluation of the algorithm. While the algorithm could not match human performance, it could produce a similar grouping with a significant redundancy reduction (approximately 48%).
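The redundancy reduction described above amounts to grouping near-duplicate free-text responses so that each cluster can be represented by a single term. The sketch below shows one simple way to do this with greedy string-similarity clustering; it is a minimal illustration under assumed names and thresholds, not the algorithm evaluated in the paper.

```python
from difflib import SequenceMatcher

def cluster_responses(responses, threshold=0.75):
    """Greedily cluster free-text attribute-elicitation responses by
    string similarity (illustrative sketch, not the paper's algorithm).

    Each response joins the first existing cluster whose exemplar it
    matches above `threshold`; otherwise it starts a new cluster.
    """
    clusters = []  # list of (normalized exemplar, member responses)
    for r in responses:
        key = r.lower().strip()
        for exemplar, members in clusters:
            if SequenceMatcher(None, key, exemplar).ratio() >= threshold:
                members.append(r)
                break
        else:
            clusters.append((key, [r]))
    return [members for _, members in clusters]

terms = ["Boomy", "boomy ", "bright", "Brightness", "muffled"]
groups = cluster_responses(terms)
# Redundancy reduction: fraction of responses removed if each cluster
# is replaced by one representative term
reduction = 1 - len(groups) / len(terms)
```

Here the five raw terms collapse to three clusters, a 40% reduction; the paper reports approximately 48% on its dataset with a more capable algorithm.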
Convention Paper 9843 (Purchase now)