In This Section
Journal of the AES
2010 January/February - Volume 58 Number 1/2
Filters based on head-related transfer functions (HRTFs) allow a monaural source to be localized in a three-dimensional virtual space. There are three ways to obtain HRTF impulse responses: (a) individual measurements on each specific person, which require extensive effort; (b) generic standardized responses, such as those of dummy heads, which are often inaccurate; and (c) responses derived from anthropometric measurements of an individual’s head. This article proposes an approach for decomposing HRTFs that will facilitate the development of systems based on the third method. Such a method offers the potential for high accuracy without the extensive measurement effort.
Reproducing a pair of binaural signals over loudspeakers requires crosstalk cancellation filters that create sound at the two ears corresponding to a transparent delivery of the intended source material. Such filtering effectively inverts the actual response from the loudspeakers to the two ears. The authors explore the consequences of this inversion, especially when the response lasts longer than that of a strictly anechoic environment. The choice of inverse design parameters proves more difficult than expected. The authors conclude that the required knowledge of the actual environment is equivalent to making in situ measurements.
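The inversion described above can be sketched as a per-frequency-bin regularized inverse of the 2×2 matrix of loudspeaker-to-ear transfer functions. The following is a minimal illustration, not the authors’ actual design; the Tikhonov constant `beta` (which limits filter gain at ill-conditioned bins) is an assumed, commonly used device rather than anything specified in the article.

```python
import numpy as np

def crosstalk_canceller(H, beta=0.005):
    """Regularized inversion of the acoustic transfer matrix.

    H has shape (nbins, 2, 2): H[k, e, s] is the complex response
    from loudspeaker s to ear e at frequency bin k.  Returns the
    canceller C with C = (H^H H + beta*I)^-1 H^H, so that H @ C
    approximates the identity (transparent delivery) at each bin.
    beta is a hypothetical regularization constant, not from the
    article.
    """
    Hh = np.conj(np.swapaxes(H, -1, -2))   # Hermitian transpose per bin
    I = np.eye(2)
    # Solve (H^H H + beta*I) C = H^H bin by bin via broadcasting
    return np.linalg.solve(Hh @ H + beta * I, Hh)
```

With anechoic (well-conditioned) responses and `beta` near zero this reduces to the exact inverse; the article’s point is that real rooms lengthen the responses in `H`, making the inversion, and the choice of parameters like `beta`, far more delicate.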
An Initial Validation of Individualized Crosstalk Cancellation Filters for Binaural Perceptual Experiments
Delivering binaural stimuli with loudspeakers through crosstalk cancellation filters avoids the intrinsic artifacts of using headphones in localization experiments. However, one must first demonstrate that such a system is perceptually equivalent to a real sound field. This study demonstrates that listeners did not perceive any meaningful difference between a real sound source at 0 degrees and a virtual rendering of it, using crosstalk cancellation from a pair of loudspeakers at ±90 degrees. Three different stimuli were used: single bursts of wideband noise, click trains, and repeated harmonic pulses. Listeners could not discriminate between the two cases in a forced-choice paradigm.
Because it was not possible to experiment with changes to the acoustic environment of the Hemicycle of the Spanish Congress while it was in session, a simulation model was created. After the model was validated against measurements of the unoccupied space, different methods of improving intelligibility were tested. From this work it was concluded that an overall redesign of the sound reinforcement system was required. Even though there were some 350 small loudspeakers, one for each deputy, the deputies were often not at their seats. As a result, those loudspeakers contributed to the background noise without enhancing intelligibility.
Correction to “Automatic Monitor Mixing for Live Musical Performance”
Standards and Information Documents
AES Standards Committee News
Digital audio interfaces; audio/video synchronization; recoding data set for bit-rate reduction; audio applications of D-type connectors; XLR polarity; measurement of digital audio equipment; networked audio control, monitoring, and connection management; audio metadata
14th Tokyo Regional Convention Report
14th Tokyo Regional Convention Exhibitors
Mastering is a process of “finishing” in audio production that aims to unify and improve the final quality of a project. In the age of analog audio it tended to involve optimizing the sound so as to overcome or accommodate the limitations of the delivery medium. With the advent of the digital age an apparently transparent delivery channel was offered, requiring mastering engineers to rethink some of their reasons for existence. Now, we live in a time of multiservice digital delivery methods that have a wide range of quality effects and limitations, giving rise once again to a challenge for mastering, albeit of a slightly different nature. A further factor in the equation is the inexorable rise of home-studio production, which makes new demands on the mastering process. These issues were discussed in a number of events at the AES 127th Convention by experts in the fields of studio mastering and processing for Internet streaming.
Audio history is examined to identify the seminal contributions to the digital audio revolution. Clues are discovered that could make it possible to discern the likely direction of digital audio engineering in the future.
Electric guitar tone, you know it’s right when you hear it. How is it achieved? The typical starting approach at the guitar amp: Shure SM57 microphone, slightly off center of one of the cones of a driver, up close and almost touching the grille cloth. Oh, and angle the microphone a little. Ask veteran engineers why this microphone placement strategy is so common and a range of justifications follows, from seemingly scientific explanations, to vague guesses, to an honest, “I have no idea. I’ve always done it that way. Everyone does.”
40th Conference, Tokyo, Call for Papers
129th Convention, San Francisco, Call for Papers