Journal of the AES
2012 November - Volume 60 Number 11
Perceptual Evaluation of Model- and Signal-Based Predictors of the Mixing Time in Binaural Room Impulse Responses
When creating virtual acoustic environments, the computational demands can be reduced by using generic late reverberation. Beyond the “mixing time,” the diffuse reverberation no longer contains details of the specific location; a perceptually validated model for predicting the mixing time of different spaces is therefore helpful. This study evaluates various predictors of the perceptual mixing time in 9 different spaces. Both model- and signal-based estimators of mixing time were examined for their ability to predict the judgments of a group of expert listeners. For a shoebox-shaped room, the average perceptual mixing time can be predicted by the enclosure’s ratio of volume to surface area, V/S, and by √V, which serve as indicators of the mean free path length and the reflection density, respectively. Moreover, the “echo density profile” by Abel and Huang (AES paper 6985) can be used to predict the perceptual mixing time from measured data.
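The two geometric indicators named in the abstract follow from standard room acoustics, and a minimal sketch can make them concrete. The Python example below (with an assumed 10 m × 8 m × 5 m shoebox room and an arbitrary target reflection density) computes the mean free path 4V/S and the time at which the image-source reflection density dN/dt = 4πc³t²/V reaches a given value, which scales with √V. The coefficients that map these indicators onto a perceptual mixing time are fitted in the paper and are not reproduced here.

```python
import math

def mean_free_path(V, S):
    """Average distance between reflections in a room (standard result, metres)."""
    return 4.0 * V / S

def time_to_reflection_density(V, density, c=343.0):
    """Time (s) at which the image-source reflection density
    dN/dt = 4*pi*c**3*t**2 / V reaches `density` reflections per second.
    Note the sqrt(V) dependence mentioned in the abstract."""
    return math.sqrt(density * V / (4.0 * math.pi * c ** 3))

# Example: hypothetical 10 m x 8 m x 5 m shoebox room
V = 10.0 * 8.0 * 5.0                       # volume, m^3
S = 2.0 * (10*8 + 10*5 + 8*5)              # surface area, m^2
print(mean_free_path(V, S))                # ~4.71 m
print(time_to_reflection_density(V, 4000.0))  # ~0.056 s
```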
Discrimination of Musical Instrument Tones Resynthesized with Piecewise-Linear Approximated Harmonic Amplitude Envelopes
Quasi-harmonic musical instrument tones can be synthesized with various additive methods, but this approach requires a large number of parameters to describe the amplitude and frequency envelopes. Even experienced users find it difficult to meaningfully manipulate so many parameters. A piecewise-linear approximation with breakpoints reduces the data complexity. This study explores the perceptual implications of choosing the density of piecewise segments. Using a two-alternative forced-choice paradigm, listeners judged whether the approximation was distinguishable from the original. Relative-amplitude spectral error and relative-amplitude critical-band error were found to be the best error metrics for predicting discrimination, accounting for about 80% of the discrimination variance. Strong correlations were observed between discrimination scores and the modified spectral incoherence based on the three strongest harmonics. Breath noise in the flute and bow noise in the violin appeared to increase discrimination.
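As an illustration of the data reduction at stake (not the paper's exact procedure), the sketch below approximates a toy harmonic-amplitude envelope with a small set of evenly spaced breakpoints and computes a relative-amplitude error of the general kind used as a discrimination predictor. The envelope shape, the breakpoint count, and the uniform breakpoint placement are all assumptions for the example; real breakpoint-selection strategies vary.

```python
import numpy as np

def piecewise_linear_approx(t, env, n_breakpoints):
    """Approximate an amplitude envelope with evenly spaced breakpoints.
    Uniform spacing is the simplest placement and only illustrates
    the reduction from len(t) samples to n_breakpoints values."""
    idx = np.linspace(0, len(t) - 1, n_breakpoints).round().astype(int)
    return np.interp(t, t[idx], env[idx])

def relative_amplitude_error(env, approx):
    """Error normalized by the envelope's own energy, in the spirit of
    the relative-amplitude metrics discussed in the abstract."""
    return np.sqrt(np.sum((env - approx) ** 2) / np.sum(env ** 2))

t = np.linspace(0.0, 1.0, 1000)
env = np.exp(-3.0 * t) * np.sin(np.pi * t)   # toy attack/decay envelope
approx = piecewise_linear_approx(t, env, 8)  # 1000 samples -> 8 breakpoints
err = relative_amplitude_error(env, approx)
```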
Binary pseudorandom sequences are widely used in audio testing, for example as test signals for impulse response measurements of loudspeakers. These sequences are simple to generate, and there are efficient algorithms for computing the cyclic cross-correlation. Pseudorandom sequences can be categorized as members of so-called families of correlation sequences. Two families (maximum-length and Kasami sequences) were examined as test signals for characterizing linear and nonlinear systems. In one application, these signals are used to derive the frequency response of a weakly nonlinear system; in the second, the sequences are used to measure the nonlinearity itself. The results were then compared to those obtained with conventional methods.
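The linear-system case can be sketched briefly. The Python example below (assumed details: a degree-10 LFSR with the primitive polynomial x¹⁰ + x³ + 1, a toy three-tap impulse response) generates a maximum-length sequence and recovers the impulse response by cyclic cross-correlation, exploiting the fact that an MLS has a nearly ideal circular autocorrelation.

```python
import numpy as np

def mls(n=10, taps=(10, 3)):
    """Maximum-length sequence of length 2**n - 1 from a Fibonacci LFSR.
    `taps` are the feedback positions of a primitive polynomial
    (x**10 + x**3 + 1 here); bits are mapped to +/-1."""
    state = [1] * n
    out = []
    for _ in range(2 ** n - 1):
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        out.append(state[-1])
        state = [fb] + state[:-1]
    return 1.0 - 2.0 * np.array(out)   # 0/1 -> +1/-1

s = mls()
L = len(s)                              # 1023

# Toy linear system: three-tap impulse response, excited cyclically
h_true = np.zeros(L)
h_true[[0, 5, 20]] = [1.0, 0.5, -0.25]
y = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h_true)))

# Recover h by cyclic cross-correlation of the output with the MLS
h_est = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(s)))) / L
```

Because the off-peak autocorrelation of an MLS is -1 rather than 0, the recovered taps carry a small bias of order 1/L, which is negligible for practical sequence lengths.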
Over the last few decades there have been numerous explorations of sonification, a concept that may be loosely defined as communicating nonaudio information as sound. As with any developing field, there comes a time when a formal structure is needed to provide a framework for understanding the collection of ad hoc experiments. To make the mapping between data and sound more explicit and less prone to misunderstanding, a sonification operator has been proposed. The authors created “notation modules” to formulate this mapping for various fields, and an example of a specific sonification operator in the field of physics is given. Nine research subjects took part in a study evaluating the experience of using this formalism.
User interfaces for searching and browsing music collections often present information about the collection’s contents by nonaudio means. This study reviews the literature to unify the various ways in which auditory spatialization can be used to augment the presentation of data. The authors examined 22 user interfaces that employ such concepts as auditory icons, perceived location, and amplitude panning, noting whether each included a usability evaluation. Commonalities among the designs are discussed, including the chosen spatialization approaches and evaluation methods.
Standards and Information Documents
AES Standards Committee News
Audio applications of networks; MADI; digital audio measurements; digital audio interfaces; audio-over-IP interoperability; open control architecture
48th Conference Report, Munich
49th Conference Preview, London
49th Conference Program
The ears of musicians and audio engineers are frequently subjected to sound exposure levels that can result in temporary or permanent hearing disorders. The AES 47th Conference provided a forum for exchanging the latest research on prevention, diagnosis, and measurement. A broader understanding of the relevant factors was gained by all concerned.
Call for Associate Technical Editors
51st Conference Call for Papers, Helsinki
Call for Nominations for the Board of Governors
Call for Awards Nominations