Authors: Lindau, Alexander; Kosanke, Linda; Weinzierl, Stefan
Affiliation: Audio Communication Group, TU Berlin, Germany
When creating virtual acoustic environments, the computational demands can be reduced by using generic late reverberation. Beyond the “mixing time,” the diffuse reverberation no longer contains details of the specific location. Therefore, a perceptually validated model for predicting the mixing time of different spaces will be helpful. This study evaluates various predictors of the perceptual mixing time using nine different spaces. Both model- and signal-based estimators of mixing time were examined for their ability to predict the results of a group of expert listeners. For a shoebox-shaped room, the average perceptual mixing time can be predicted by the enclosure’s ratio of volume to surface area V/S and by √V, which serve as indicators of the mean free path length and the reflection density, respectively. Moreover, the “echo density profile” by Abel and Huang (AES paper 6985) can be used to predict the perceptual mixing time from measured data.
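As a rough illustration of the model-based predictors mentioned above, the following sketch uses two textbook room-acoustic rules: the classical mean free path 4V/S and Polack's rule of thumb t_mix ≈ √V ms (V in m³). These are standard formulas and may differ in detail from the exact predictors evaluated in the paper:

```python
import math

def mean_free_path(V, S):
    """Classical mean free path of a room: 4V/S, in meters."""
    return 4.0 * V / S

def mixing_time_polack(V):
    """Model-based mixing-time estimate t_mix ~ sqrt(V):
    V in cubic meters, result in milliseconds (Polack's rule of thumb)."""
    return math.sqrt(V)

# Example: a 10 m x 8 m x 4 m shoebox room
V = 10 * 8 * 4                   # volume: 320 m^3
S = 2 * (10 * 8 + 10 * 4 + 8 * 4)  # surface area: 304 m^2
print(round(mean_free_path(V, S), 2))   # mean free path in m  -> 4.21
print(round(mixing_time_polack(V), 1))  # mixing time in ms    -> 17.9
```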
Authors: Lee, Chung; Horner, Andrew; Beauchamp, James
Affiliation: Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong; School of Music and Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
Quasi-harmonic musical instrument tones can be synthesized with various additive methods, but this approach requires a large number of parameters to describe the amplitude and frequency envelopes. Even experienced users find it difficult to manipulate so many parameters meaningfully. A piecewise linear approximation with breakpoints reduces the data complexity. This study explores the perceptual implications of the choice of piecewise-segment density. Using a two-alternative forced-choice paradigm, listeners judged whether the approximation was distinguishable from the original. Relative-amplitude spectral error and relative-amplitude critical-band error were found to be the best error metrics for predicting discrimination, accounting for about 80% of the discrimination variance. Strong correlations were also observed between discrimination scores and the modified spectral incoherence based on the three strongest harmonics. Breath noise in the flute and bow noise in the violin appeared to increase discrimination.
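The breakpoint approximation described above can be sketched as follows. The segment placement (uniform breakpoints) and the error metric (a plain RMS-based relative-amplitude error) are simplified stand-ins for illustration, not the paper's exact procedure:

```python
import numpy as np

def piecewise_linear_envelope(env, n_breakpoints):
    """Approximate an amplitude envelope with a fixed number of
    equally spaced breakpoints joined by straight lines."""
    n = len(env)
    bp_x = np.linspace(0, n - 1, n_breakpoints)
    bp_y = np.interp(bp_x, np.arange(n), env)   # sample envelope at breakpoints
    return np.interp(np.arange(n), bp_x, bp_y)  # reconstruct full-length envelope

def relative_amplitude_error(orig, approx):
    """RMS of the difference normalized by the RMS of the original:
    a simplified stand-in for the paper's relative-amplitude spectral error."""
    return np.sqrt(np.mean((orig - approx) ** 2)) / np.sqrt(np.mean(orig ** 2))

# Example: an attack/decay envelope, approximated coarsely and finely
t = np.linspace(0, 1, 200)
env = np.minimum(t / 0.1, 1.0) * np.exp(-2 * t)
coarse = piecewise_linear_envelope(env, 4)
fine = piecewise_linear_envelope(env, 16)
# More breakpoints give a smaller approximation error
print(relative_amplitude_error(env, fine) < relative_amplitude_error(env, coarse))
```

Denser segmentation lowers the error but raises the data complexity; the perceptual question studied in the paper is where along that trade-off the approximation becomes indistinguishable from the original.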
Authors: Herzog, Stephan; Potchinkov, Alexander
Affiliation: Department of Electrical & Computer Engineering, University of Kaiserslautern, Kaiserslautern, Germany
Binary pseudorandom sequences are widely used in audio testing, for example as test signals for impulse response measurements of loudspeakers. These sequences are simple to generate, and efficient algorithms exist for computing the cyclic cross-correlation. Pseudorandom sequences can be categorized as members of so-called families of correlation sequences. Two families (maximum-length and Kasami sequences) were investigated as test signals for examining linear and nonlinear system characteristics. In one application the signals are used to derive the frequency response of a weakly nonlinear system; in a second application the sequences are used to measure the nonlinearity itself. The results were then compared to those obtained with conventional methods.
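The measurement principle — excite the system with a maximum-length sequence, then recover the impulse response by cyclic cross-correlation — can be sketched as follows. The LFSR taps and the FFT-based correlation are standard textbook choices, not necessarily those used in the paper:

```python
import numpy as np

def mls(n_bits, taps):
    """Generate a maximum-length sequence of length 2**n_bits - 1 with a
    Fibonacci linear feedback shift register using the given tap positions."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

def measure_ir(system, n_bits=10, taps=(10, 7)):
    """Estimate the impulse response of `system` by driving it with a
    bipolar MLS and circularly cross-correlating output with input."""
    x = 2.0 * mls(n_bits, taps) - 1.0           # map {0,1} -> {-1,+1}
    L = len(x)
    y = system(x)
    # cyclic cross-correlation via FFT, normalized by the sequence length
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x)))) / L

# Example: a short FIR system, driven cyclically (steady state)
true_h = np.array([1.0, 0.5, 0.25])
system = lambda x: np.convolve(np.tile(x, 2), true_h, mode="full")[len(x):2 * len(x)]
h_est = measure_ir(system)
print(np.round(h_est[:3], 2))  # ~ [1.0, 0.5, 0.25]
```

Because the bipolar MLS has a nearly ideal cyclic autocorrelation (L at lag 0, −1 elsewhere), the cross-correlation recovers the impulse response up to a small DC bias of order 1/L.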
Authors: Vogt, Katharina; Höldrich, Robert
Affiliation: University of Music and Performing Arts, Graz, Austria
Over the last few decades there have been numerous explorations of sonification, a concept that may be loosely defined as communicating nonaudio information as sound. As with any developing field, there comes a time when a formal structure is needed to provide a framework for understanding the collection of ad hoc experiments. To make the mapping between data and sound more explicit and less prone to misunderstanding, a sonification operator has been suggested. The authors created “notation modules” to formulate this mapping for various fields, and an example of a specific sonification operator from the field of physics is given. Nine researchers took part in a study evaluating their experience with this formalism.
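A minimal, hypothetical example of an explicit data-to-sound mapping, in the spirit of a sonification operator (the authors' formalism and notation modules are considerably richer than this toy parameter mapping):

```python
def sonification_operator(data, f_min=220.0, f_max=880.0):
    """Toy parameter-mapping sonification: linearly map each data value
    onto a pitch (Hz) in a fixed frequency range. Writing the mapping
    down as an explicit operator makes it reproducible and unambiguous."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0          # guard against constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in data]

temps = [12.0, 18.0, 24.0]           # hypothetical measurement series
print(sonification_operator(temps))  # [220.0, 550.0, 880.0]
```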
Authors: Stewart, Rebecca; Sandler, Mark
Affiliation: Queen Mary University of London, London, UK
User interfaces for searching and browsing collections of music often present information about the contents of the collection by nonaudio means. This study reviews the literature to unify the various ways in which auditory spatialization can be used to augment the presentation of such data. The authors examined 22 user interfaces that employ concepts such as auditory icons, perceived location, and amplitude panning, noting whether a usability evaluation was carried out. Commonalities among the designs are discussed, including the chosen spatialization approaches and evaluation methods.
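Amplitude panning, one of the spatialization concepts surveyed, can be illustrated with the standard constant-power (sine/cosine) pan law; this is a textbook sketch, not drawn from any of the reviewed interfaces:

```python
import math

def constant_power_pan(pan):
    """Constant-power stereo amplitude panning.
    pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Returns (left_gain, right_gain) with L^2 + R^2 == 1, so perceived
    loudness stays roughly constant as a source moves across the image."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

for p in (-1.0, 0.0, 1.0):
    l, r = constant_power_pan(p)
    print(round(l, 3), round(r, 3))  # hard left, center (~0.707 each), hard right
```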
The ears of musicians and audio engineers are frequently subjected to sound exposure levels that can result in temporary or permanent hearing disorders. The AES 47th Conference provided a forum for the exchange of the latest research on prevention, diagnosis, and measurement. A broader understanding of the relevant factors was gained by all concerned.