Last Updated: 20060822, mei
P21 - Posters: Psychoacoustics and Perception
Sunday, October 8, 9:30 am — 11:00 am
P21-1 Perceptual Importance of the Number of Loudspeakers for Reproducing the Late Part of a Room Impulse Response—Audun Solvang, U. Peter Svensson, Norwegian University of Science and Technology - Trondheim, Norway
A sound field generated by 16 loudspeakers in the horizontal plane was used as a reference, and the impairment introduced by using 8, 4, and 3 loudspeakers to reproduce the late part of the room impulse response was investigated in listening tests. Stimuli were synthesized from repetitive octave-band-wide pulses convolved with room impulse responses, and both the tempo and the octave-band center frequencies were varied. Results generally show a barely perceptible impairment. Increasing the tempo led to a larger impairment for all loudspeaker configurations and frequencies. The impairment depended on the number of loudspeakers at 8 kHz but not at 250 Hz or 1 kHz. The reverberation in the listening room (0.12–0.20 s) might have masked fluctuations in interaural time differences, which are the dominant cue at 250 Hz and 1 kHz. The reverberation time was, however, so short that it hardly influenced fluctuations in interaural level differences, the dominant cue at 8 kHz.
Convention Paper 6975 (Purchase now)
P21-2 A System for Adapting Broadcast Sound to the Aural Characteristics of Elderly Listeners—Tomoyasu Komori, Tohru Takagi, NHK Science & Technical Research Laboratories - Setagaya-ku, Tokyo, Japan
This paper describes an adaptive sound reproduction system for elderly listeners. We developed new audiometric equipment to measure the MAF (Minimum Audible Field) of listeners in the range from 125 Hz to 16 kHz. We determined the average MAF by age for people from their twenties to their eighties and investigated ways to adapt speech signals for elderly listeners based on their aural characteristics. The system adjusts the speech signal energy with reference to the partitioned frequency bands below the average MAF. We broadcast pilot programs using a simple method in which the speech is mixed with BGM (background music) reduced by only 6 dB from its original level. We confirmed that our proposed method is preferred over this simple method.
Convention Paper 6976 (Purchase now)
P21-3 A Comparison between Spatial Audio Listener Training and Repetitive Practice—Rafael Kassier, Tim Brookes, Francis Rumsey, University of Surrey - Guildford, Surrey, UK
Despite the existence of various timbral ear training systems, relatively little work has been carried out on listener training for spatial audio. Additionally, listener training in published studies has tended to extend only to repetitive practice without feedback. For a generalized training system for spatial audio listening skills to prove effective, it must demonstrate that learned skills are transferable away from the training environment, and it must compare favorably with repetitive practice on specific tasks. A novel study has been conducted to compare the effects of a generalized training system and of repetitive practice on performance in spatial audio evaluation tasks. Transfer is assessed, and practice and training are compared against a control group for tasks involving both near and far transfer.
Convention Paper 6977 (Purchase now)
P21-4 Quantified Total Consonance as an Assessment Parameter for the Sound Quality—Sang Bae Chon, In Yong Choi, Mingu Lee, Koeng-Mo Sung, Seoul National University - Seoul, Korea
There have been many attempts to quantify consonance. This paper introduces a more efficient and systematic algorithm for consonance quantification than conventional definitions. We also verify that the quantified consonance can be treated as an additional psychoacoustic parameter for evaluating the sound quality of a noise-like sound, such as that produced by the dual horns of a vehicle.
Convention Paper 6978 (Purchase now)
P21-5 Music Genre Categorization in Humans and Machines—Enric Guaus, Perfecto Herrera, High Music School of Catalonia - Barcelona, Spain, and Universitat Pompeu Fabra, Barcelona, Spain
Music genre classification is one of the most active tasks in music information retrieval (MIR). Many successful approaches can be found in the literature. Most of them are based on machine learning algorithms applied to various audio features computed automatically for a specific database. But there is no computational model that explains how musical features are combined to yield genre decisions in humans. In this paper we present a series of listening experiments in which audio has been altered to preserve some properties of the music (rhythm, harmony, etc.) while degrading others. Results are compared with those of a series of state-of-the-art genre classifiers based on these musical properties, and we draw some lessons from the comparison.
Convention Paper 6979 (Purchase now)