Journal of the AES
2011 November - Volume 59 Number 11
When size limitations of loudspeakers prevent the reproduction of low-frequency sounds, nonlinear devices can be used to create the illusion of the missing bass. Such virtual bass systems generate harmonics of the missing fundamental. However, they can also generate unwanted intermodulation distortion, which depends on the particular audio sample and the selected nonlinearity. A detailed analysis showed that the ideal nonlinearity should not be even-symmetric and that its second derivative should be negative on the input interval 0 to 1.
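The abstract's criteria can be illustrated with a toy nonlinearity. The function below is an assumption for demonstration only (not the paper's actual nonlinearity): f(x) = 2x - x², which is not even-symmetric and has second derivative -2 < 0 on [0, 1]. Driving it with a low-frequency sine scaled to [0, 1] produces energy at the second harmonic, which is the mechanism virtual bass relies on:

```python
import numpy as np

def virtual_bass_nl(x):
    # Illustrative nonlinearity satisfying the stated criteria:
    # f(x) = 2x - x**2 on [0, 1]; f''(x) = -2 < 0, and f is not even-symmetric.
    return 2.0 * x - x ** 2

fs = 8000                                     # sample rate, Hz
t = np.arange(fs) / fs                        # one second of samples
f0 = 50.0                                     # "missing" fundamental, Hz
x = 0.5 + 0.5 * np.sin(2 * np.pi * f0 * t)    # sine scaled into [0, 1]
y = virtual_bass_nl(x)

# Magnitude spectrum; with N = fs the bins land on integer frequencies,
# so bin 100 is the 2nd harmonic of the 50 Hz fundamental.
spec = np.abs(np.fft.rfft(y)) / len(y)
```

Expanding analytically, y = 0.625 + 0.5 sin(2πf0 t) + 0.125 cos(4πf0 t), so the nonlinearity adds a 100 Hz component from a pure 50 Hz input; a concave curve like this one generates the harmonics, while mixed-frequency input would additionally produce the intermodulation products the abstract warns about.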
Small rooms are characterized by low-frequency resonant modes, where traditional equalization systems may not efficiently correct the response at a few narrow frequency bands. A proposed supplementary approach based on virtual bass improves output efficiency while further reducing quality variations over a wide area. The problematic low frequencies are reduced by the equalization system and are processed to create the illusion of the reduced bass by adding harmonics of those frequencies. Subjective testing showed that the seat-to-seat variation in quality was significantly reduced, but the amount of virtual bass must be limited to avoid the perception of distortion.
Creating interoperability between two standards-based network technologies, IEEE 1394 (FireWire) and Ethernet Audio/Video Bridging (AVB), allows them to coexist in a single application configuration. While both technologies provide the transport of synchronized, low-latency, real-time audio and video data, they take different approaches to enabling this transport. By using a compatible audio gateway with a common control protocol, audio devices on these disparate networks can be connected and controlled.
In many applications it would be useful to estimate the location of a loudspeaker in a listening room or concert hall directly from the room impulse response. A novel method, based on combining the information from time-difference-of-arrival (TDOA) and time-of-arrival (TOA), is proposed. The TOA-based method has a large variance in direction estimates, and the TDOA-based method has a large variance in distance estimates. Experiments show that the combined method is more accurate than either method alone by as much as a factor of two. Only a single parameter needs to be optimized.
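The complementary roles of the two cues can be sketched in a toy 2-D setup (an assumed geometry, not the paper's method or experiment): TOA gives the range to the array, a TDOA across a mic pair gives the bearing, and combining the two yields a position estimate:

```python
import numpy as np

c = 343.0  # speed of sound, m/s

# Assumed toy geometry: two microphones on the x-axis, 0.2 m apart
mics = np.array([[-0.1, 0.0], [0.1, 0.0]])
src = np.array([2.0, 1.5])          # loudspeaker position to recover

# Times of arrival at each mic (here simulated from the geometry;
# in practice these come from the room impulse responses)
toa = np.linalg.norm(mics - src, axis=1) / c

# Distance cue from TOA: average range to the array center
dist = c * toa.mean()

# Direction cue from TDOA, far-field approximation:
# c * (t0 - t1) ≈ d * cos(theta), with d the mic spacing
d = np.linalg.norm(mics[1] - mics[0])
cos_theta = np.clip(c * (toa[0] - toa[1]) / d, -1.0, 1.0)
theta = np.arccos(cos_theta)

# Combined estimate: TOA supplies the radius, TDOA the angle
est = dist * np.array([np.cos(theta), np.sin(theta)])
```

This is only a minimal illustration of why fusing the cues helps: each cue alone fixes one polar coordinate well and the other poorly, so combining them constrains the position in both dimensions.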
Because of strong interest in designing audio systems for automobiles, an audio laboratory installed in a real car has been developed using professional equipment. The proposed method aims to create a foundation for a holistic end-to-end approach for real-time embedded system design (hArtes) using modern algorithm tools and reconfigurable hardware. Proof-of-concept tests show that equivalent results are achieved with significantly less time and effort.
Standards and Information Documents
AES Standards Committee News
AESSC chair; digital audio measurements; digital audio interfaces; audio file transfer and exchange; audio applications of networks; digital XLR connector; digital timing and synchronization; unique identifiers in AES3; ATM cells over Ethernet; connectors for loudspeaker patch panels; audio over ATM networks; audio exchange disk format; conservation of polarity; personal computer audio quality measurements
43rd Conference Report, Pohang
The growth in computer power over the past decade has enabled remarkable possibilities for the automatic interpretation of audio signals. As human listeners we are able to make all sorts of conscious and unconscious interpretations of what we hear, from the recognition of instruments and voices within a complex texture, through the extraction of melodic and chordal progressions, to the inference of emotional mood or cultural associations. All of this is based on listening to a single mixed stream of sound that is just a messy waveform. If we are lucky there may be some spatial information involving the reception of more than one related stream from different directions, but at best we have only two ears no matter how many sources there are. Enabling machines to make sense of mixed audio streams was something close to the realm of science fiction not so long ago, but the latest research in semantic audio analysis brings it within our grasp.
132nd Call for Papers, Budapest
48th Call for Papers, Munich