Journal of the AES
2010 May - Volume 58 Number 5
The proposed hearing-aid speech quality index accurately predicts speech quality ratings for hearing-impaired and normal-hearing listeners judging sound processed with a simulated hearing aid. The index is the product of two subindices: one predicts quality in the presence of additive noise, nonlinear processing, and distortion; the other predicts quality under linear filtering. The composite index is based on a cochlear model that incorporates elements of impaired hearing.
Creating private listening using loudspeakers can be achieved by spatial processing such that target regions of the space are made sonically bright (louder) while other regions are made sonically dark (quieter). When two bright regions corresponding to the left and right channels of a stereo signal are placed at the listener’s ears, private listening can be enjoyed without disturbing others in the space. The system is based on spatial contrast control feeding a linear loudspeaker array. Empirical results show that 20 dB of contrast is achieved in one-third-octave bands above 1 kHz.
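The reported figure of merit, acoustic contrast, is the ratio of mean squared pressure in the bright zone to that in the dark zone, expressed in decibels. A minimal sketch of that computation (the function name and 8-point zone sampling are illustrative assumptions, not the authors' implementation):

```python
import math

def acoustic_contrast_db(p_bright, p_dark):
    """Contrast in dB between the spatially averaged squared pressure
    magnitudes of the bright and dark zones.

    p_bright, p_dark: iterables of (possibly complex) pressures sampled
    at control points in each zone, at one frequency.
    """
    num = sum(abs(p) ** 2 for p in p_bright) / len(list(p_bright))
    den = sum(abs(p) ** 2 for p in p_dark) / len(list(p_dark))
    return 10.0 * math.log10(num / den)

# A pressure 10x lower in the dark zone (100x lower energy) gives
# the 20 dB contrast figure quoted in the abstract.
contrast = acoustic_contrast_db([1.0] * 8, [0.1] * 8)
```

In a contrast-control system, the loudspeaker array weights are chosen to maximize exactly this ratio, subject to a constraint on array effort.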
Automated testing of amplifier efficiency is made fast and flexible by using an active load that can simulate various conditions. The proposed method uses a controlled load that can range from 4 to 50 ohms with a phase between –60 and +60 degrees for frequencies from 20 Hz to 20 kHz. This active load has been used in an automated test procedure to capture efficiency maps of class-B and class-D amplifiers over a full range of operating conditions. While test results showed that the class-D amplifier outperformed the class-B amplifier, real-world efficiency was significantly lower than might be expected because of power supply efficiency, as well as the power consumption of driving stages and peripheral circuits.
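At each grid point of such an efficiency map, efficiency is the real power delivered to the (possibly reactive) load divided by the power drawn from the supply. A hedged sketch of that single-point calculation (the function and parameter names are assumptions for illustration, not the paper's test procedure):

```python
import math

def amplifier_efficiency(v_load_rms, i_load_rms, phase_deg, p_supply):
    """Efficiency at one operating point.

    Real power into a reactive load is V_rms * I_rms * cos(phase),
    where phase is the angle between load voltage and current
    (the active load in the abstract spans -60 to +60 degrees).
    p_supply is the total power drawn from the supply, which also
    covers driver-stage and peripheral losses.
    """
    p_out = v_load_rms * i_load_rms * math.cos(math.radians(phase_deg))
    return p_out / p_supply

# Example: 10 V rms into a 4-ohm resistive load (2.5 A, 0 degrees)
# while drawing 31.25 W from the supply -> efficiency 0.8.
eta = amplifier_efficiency(10.0, 2.5, 0.0, 31.25)
```

Sweeping load magnitude, phase, frequency, and drive level over a grid and evaluating this ratio at each point yields the efficiency map described in the abstract.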
A music information retrieval system can extract information that arises from how various sound sources are panned between channels during the mixing and recording process. The authors propose augmenting standard audio features, which describe the source material itself, with one of two methods for extracting panning and contrast features. These additional features provide statistically significant information for nontrivial audio classification tasks. Traditional classification focuses on information about pitch, rhythm, and timbre. Other types of mixing parameters are proposed for future work.
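One simple way to derive a stereo-field descriptor of this kind is to compare the energies of the left and right channels. The sketch below is an illustrative assumption, not the authors' feature set:

```python
def panning_coefficient(left, right):
    """Crude panning descriptor in [-1, 1] from channel energies:
    -1 = fully left, 0 = centre, +1 = fully right.

    This is a hedged illustration of extracting mix-dependent
    information absent from mono pitch/rhythm/timbre features;
    the paper's actual panning and contrast features differ.
    """
    e_left = sum(x * x for x in left)
    e_right = sum(x * x for x in right)
    return (e_right - e_left) / (e_right + e_left)

# A source mixed hard right yields +1; an identical signal in
# both channels (centre-panned) yields 0.
hard_right = panning_coefficient([0.0, 0.0], [0.5, -0.5])
```

Computed per frame or per frequency band, such descriptors can be appended to a standard feature vector before classification.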
Standards and Information Documents
AES Standards Committee News
Project management; audio connectors; audio connector polarity; measurement of digital audio equipment
It is increasingly possible either to emulate legacy audio devices and effects or to create new ones using digital signal processing. These are often implemented as plug-ins to digital audio workstation packages, using one of several proprietary formats such as VST, DirectX, Audio Suite, or Audio Units. Papers from a session chaired by David Berners at the AES 127th Convention last year in New York covered a wide range of recent innovations in this field, including Leslie speaker, plate reverb, and guitar amplifier emulation.