Last Updated: 2005-04-11
Tuesday, May 31, 09:30 — 13:00
Chair: Søren Bech, Bang & Olufsen - Struer, Denmark
Q-1 OPAQUE—A Tool for the Elicitation and Grading of Audio Quality Attributes—Jan Berg, Luleå University of Technology - Piteå, Sweden
The evaluation of different aspects of audio quality can be realized by means of attribute scales. Studies have shown that the attributes selected are of great importance for the evaluation result; consequently, the process whereby these attributes are generated has to be given careful consideration. It was previously shown that elements from the repertory grid technique facilitated the elicitation and grading of quality attributes, which resulted in a new audio quality evaluation method. The results of this work have now been implemented as a software prototype aimed at supporting listening tests. This paper reports on the results of a pilot experiment involving the OPAQUE software.
Convention Paper 6480 (Purchase now)
Q-2 Communicating Listeners’ Auditory Spatial Experiences: A Method for Developing a Descriptive Language—Natanya Ford, Francis Rumsey, University of Surrey - Guildford, Surrey, UK; Tim Nind, Harman/Becker Automotive Systems - Bridgend, UK
A method is presented that details how a descriptive language can be developed for effectively communicating listeners’ individual auditory spatial experiences during subjective evaluations. The language-development method focuses on identifying and minimizing ambiguities that could prevent the representation of listeners’ experiences or the researcher’s comprehension of these experiences when communicated. The development of a specific descriptive graphical language provides an example of the method in practice. Details of this particular language’s evolution are summarized: from the elicitation and clarification of listeners’ individual graphical descriptors, to the development and evaluation of a communal language. Ambiguities encountered at the various stages in this language’s development are illustrated in a descriptive process model.
Convention Paper 6481 (Purchase now)
Q-3 Multistimulus Ranking versus Pairwise Comparison in Assessing Quality of Musical Instrument Sounds—Luiza Budzynska, Jacek Jelonek, Ewa Lukasik, Roman Slowinski, Robert Susmaga, Poznan University of Technology - Poznan, Poland
The paper compares the process and the results of two different methods for ranking musical instrument sounds. A dedicated software tool presents recorded sounds to the expert, who makes his/her assessment according to particular criteria using a multistimulus test on a scale from 1 to n and a pairwise comparison followed by the Net Flow Scoring method. Several aspects of the ranking process can be analyzed, e.g., the consistency of the results between the two methods, the stability of rankings over time, and the cognitive effort required of the expert in each ranking method. In comparing the resulting rankings, statistical measures such as Kendall's coefficient and the Blest coefficient are used. Results show that the multistimulus test was faster to perform but less discriminating than pairwise comparison, and demanded more cognitive effort.
Convention Paper 6482 (Purchase now)
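The Kendall coefficient used above to compare the two methods' rankings measures rank agreement from concordant and discordant item pairs. A minimal illustrative sketch in Python (the instrument names and rank values are invented for illustration, not taken from the paper):

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's rank-correlation coefficient between two rankings.

    Each argument maps an item to its rank position (1 = best).
    Returns a value in [-1, 1]: 1 for identical orderings,
    -1 for exactly reversed orderings (assumes no tied ranks).
    """
    items = list(rank_a)
    concordant = discordant = 0
    for x, y in combinations(items, 2):
        a = rank_a[x] - rank_a[y]
        b = rank_b[x] - rank_b[y]
        if a * b > 0:          # pair ordered the same way in both rankings
            concordant += 1
        elif a * b < 0:        # pair ordered oppositely
            discordant += 1
    n = len(items)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical rankings of four sounds by the two test methods.
multistimulus = {"violin": 1, "viola": 2, "cello": 3, "bass": 4}
pairwise      = {"violin": 1, "viola": 3, "cello": 2, "bass": 4}
print(kendall_tau(multistimulus, pairwise))  # 5 concordant, 1 discordant pair -> 2/3
```

A coefficient near 1 would indicate that the multistimulus and pairwise procedures produced essentially the same ordering of the sounds.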
Q-4 Selecting Participants for Listening Tests of Multichannel Reproduced Sound—Florian Wickelmaier, Aalborg University - Aalborg, Denmark; Sylvain Choisel, Aalborg University - Aalborg, Denmark, and Bang & Olufsen A/S, Struer, Denmark
A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multichannel reproduced sound. Ninety-one participants filled in a web-based questionnaire. Seventy-eight of them took part in an assessment of their hearing thresholds, their spatial hearing, and their verbal production abilities. The listeners displayed large individual differences in their performance. Forty subjects were selected based on the test results. The self-assessed listening habits and experience in the web-questionnaire could not predict the results of the selection procedure. Further, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel.
Convention Paper 6483 (Purchase now)
Q-5 The Importance of Phase in the Presence of Sound Source Direction—Koray Ozcan, Nokia (UK) Limited - Farnborough, UK
Multiple-frequency wideband signals were processed using individual HRTFs to adjust the sound source direction, and the main localization cues were tested with direction in conflict. A previously presented method based on the Hilbert transform enables phase and time to become independent of each other, even for wideband signals. It was therefore possible to put phase in conflict with the HRTF while leaving the amplitude characteristics of the stereo signals unchanged. The results show that phase is less significant than time in the presence of direction. Furthermore, the central diffuse sound field that occurs when intensity and time are in conflict, and which reduces localization performance, is not present when direction and either intensity, time, or phase are placed in conflict.
Convention Paper 6484 (Purchase now)
Q-6 Reproduction of Auditorium Spatial Impression with Binaural and Stereophonic Sound Systems—Paolo Martignon, Andrea Azzali, University of Parma - Parma, Italy; Densil Cabrera, University of Sydney - Sydney, Australia; Andrea Capra, Angelo Farina, University of Parma - Parma, Italy
Binaural room impulse responses convolved with anechoic recordings are commonly used in auditorium acoustics design and research. Binaural and stereophonic (ORTF) room impulse responses, recorded in five concert auditoria, were used in this study to test the spatial audio quality of four reproduction systems: conventional stereophony, binaural headphones, stereo dipole, and double stereo dipole. Anechoic music, convolved with the impulse responses, was reproduced over these systems. The systems were matched as closely as possible to each other and to the sound levels that would occur in the auditoria for the musical source. In a subjective test, subjects rated the room size, sound source distance, and realism of the reproduction. Results indicate that the stereo dipole systems gave the best spatial reproduction.
Convention Paper 6485 (Purchase now)
Q-7 Predicting Timbral Variation for Sharpness-Matched Guitar Tones Resulting from Distortion-Based Effects Processing—Atsushi Marui, William Martens, McGill University - Montreal, Quebec, Canada
In order to develop a model for predicting the timbral variation of guitar tones resulting from multiparameter distortion-based effects processing, physical measures of guitar signals were related to perceptual and semantic data collected from a group of young adults. Stimuli were generated using three distortion processes and were subsequently adjusted to yield three values of Zwicker Sharpness (ZS). Using pairwise comparisons, 63 listeners made perceptual dissimilarity ratings of nine uniquely processed versions of a single guitar performance. A stimulus space comprising the three most salient dimensions along which the guitar timbres differed was then derived. Additionally, 57 listeners made direct ratings on 11 bipolar adjective scales for the same nine stimuli in order to aid the interpretation of the stimulus space. Coordinates in the first dimension of the stimulus space were found to be related to the ZS values computed for the physical signals, corresponding to the perceptual attribute termed auditory sharpness. The second and third dimensions were found to be products of the three distinct distortion processes employed in stimulus generation; these two dimensions were predictable from measures of the spectral features that remained after removing the spectral tilt related to the ZS of the stimuli. A model is then presented that can be used to predict timbral variation along perceptual dimensions of guitar distortion beyond auditory sharpness.
Convention Paper 6486 (Purchase now)
©2005 Audio Engineering Society, Inc.