Journal of the Audio Engineering Society
The Journal of the Audio Engineering Society — the official publication of the AES — is the only peer-reviewed journal devoted exclusively to audio technology. Published 10 times each year, it is available to all AES members and subscribers.
The Journal contains state-of-the-art technical papers and engineering reports; feature articles covering timely topics; pre- and post-reports of AES conventions and other Society activities; news from AES sections around the world; Standards and Education Committee work; membership news; patents; new products; and newsworthy developments in the field of audio.
2015 January/February - Volume 63 Number 1/2
Guest Editors’ Note
A mock-up of an airplane cabin can be a good simulation tool for the study, prediction, demonstration, and jury testing of interior aircraft sound quality. The authors used real flight recordings, captured with vibration and microphone transducer arrays, in combination with multichannel equalization based on least-mean-square optimization techniques. The paper presents physical evaluations of sound fields reproduced from real flight recordings using an 80-channel microphone array and 41 reproduction sources in the cabin mock-up. For a faithfully reproduced sound environment, the time, frequency, and spatial characteristics of the original field must be preserved. The results are satisfactory given the various limitations and constraints.
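As a rough illustration of the least-mean-square optimization the abstract mentions, the sketch below adapts a single-channel FIR equalizer by the LMS rule so that the filtered source signal matches a desired signal at a monitoring microphone. The paper's system is multichannel; the function name, tap count, and step size here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_equalize(x, d, n_taps=8, mu=0.02):
    """Adapt FIR weights w so that (w * x) approximates d, via LMS.

    x : signal driving the reproduction source
    d : desired (target) signal at the monitoring microphone
    """
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)          # tapped delay line [x[n], x[n-1], ...]
    y = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y[n] = w @ buf
        e = d[n] - y[n]             # instantaneous error
        w += mu * e * buf           # stochastic gradient step
    return w, y

# toy check: identify a known 3-tap path from white-noise excitation
h = np.array([1.0, 0.5, 0.25])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]
w, y = lms_equalize(x, d)
```

With a white-noise input and an exactly realizable target, the adapted weights converge to the true path taps, which is the behavior the multichannel system exploits frequency band by frequency band.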
Digital equalization of audio systems is mostly performed on a channel-by-channel basis, where loudspeakers are equalized separately and independently of each other. The authors present a multichannel room-correction approach in which the loudspeakers to be equalized are assisted by a set of support loudspeakers acting in combination to improve the response of the main loudspeakers while suppressing the reverberation of the listening space. By introducing a focus control parameter, room acoustics can be suppressed to a greater or lesser degree in a specified frequency band; full dereverberation may not always be desirable.
This research paper considers the generation of actively compensated 2D sound fields in a semi-reverberant room using an array of five third-order loudspeakers that exploit room reflections to improve reproduction quality. The loudspeakers and the calibration microphone were cylindrical designs, since that geometry is less expensive than spherical designs and better suited to 2D reproduction. The results show that active compensation using higher-order loudspeakers can produce sound fields with attenuated early reflections and reduced early reverberation over a radius of 90 mm, large enough for a single listener. Furthermore, third-order sources produced greater suppression of early reflections than zeroth-order sources, and the frequency response could be equalized over a wider range.
This article presents an analytical approach for the description and synthesis of moving virtual sound sources. Because of the complexity of spatially distributed Doppler shifts, the physical synthesis of sound fields generated by moving virtual sources has been the subject of extensive research over the last ten years. Based on the spectral description of a moving source, an analytical expression was formulated for the frequency and wavenumber content of a virtual monopole traveling in an arbitrary horizontal direction. From the wavenumber content, explicit synthesis driving functions were obtained using the Spectral Division Method. By utilizing the spectral description and applying the method of stationary phase, a compact analytical formula was found for the Wave Field Synthesis of moving sources with a linear secondary source distribution.
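The spatially distributed Doppler shifts the abstract refers to follow from the classic relation f_r = f0 · c / (c − v_r), where v_r is the component of the source velocity toward the listener. A minimal sketch of that relation (positions, velocities, and the function name are illustrative; the paper works in the wavenumber domain, not point by point):

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def received_frequency(f0, src_pos, src_vel, listener):
    """Instantaneous Doppler-shifted frequency heard from a moving monopole.

    f_r = f0 * c / (c - v_r), with v_r the radial velocity of the
    source toward the listener (positive when approaching).
    """
    r = np.asarray(listener, dtype=float) - np.asarray(src_pos, dtype=float)
    v_r = np.dot(src_vel, r) / np.linalg.norm(r)  # velocity toward listener
    return f0 * C / (C - v_r)
```

Because v_r varies with listener position, every point in the reproduction area hears a different instantaneous frequency, which is why a naive point-wise synthesis is costly and a spectral formulation pays off.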
Even with the more advanced sound rendering methods, creating a convincing near-field image has remained a challenge, especially when sound is integrated with high-resolution video. In order to render a near-field sound image in a relatively simple yet effective way, the authors proposed a new method using an overhead planar loudspeaker. Subjective evaluation showed that planar waves radiating from an overhead position generated a sound image very near the listener when coupled with an additional filter that removes the spectral cues associated with an overhead sound source. The results also showed that the proposed method could continuously control the distance of the sound image between the screen and the listener position. The planar loudspeaker generated a smaller variance of group delay at the listener's ears than conventional loudspeakers.
There are many situations in which multiple audio programs are replayed over loudspeakers in the same acoustic environment, requiring listeners to focus on their desired target program. Where this situation is deliberately created and the different program items are centrally controlled, each listener can be viewed as having a personal sound zone system. In order to evaluate and optimize such situations in a perceptually relevant manner, the authors created a predictive model using the features that contribute to the distraction caused by unwanted sounds. Feature extraction was motivated by a qualitative analysis of subject responses. Distraction ratings were collected for one hundred randomly created audio-on-audio interference situations with music as both target and interferer programs. The selected features were related to the overall loudness, the loudness ratio, the perceptual evaluation of audio source separation, and the frequency content of the interferer. The model was found to predict distraction accurately for both the training and validation datasets.
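A model of this kind maps per-situation features to mean distraction ratings. The sketch below fits a plain least-squares linear model; the feature set and function names are hypothetical stand-ins, and the paper's actual model and features are not reproduced here.

```python
import numpy as np

def fit_distraction_model(X, y):
    """Least-squares linear map from interference features to distraction.

    X : (n_cases, n_features) feature matrix, e.g. loudness ratio,
        interferer spectral features -- illustrative placeholders only
    y : (n_cases,) mean distraction ratings from listening tests
    """
    A = np.column_stack([X, np.ones(len(X))])   # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_distraction(coef, X):
    """Apply the fitted coefficients to new feature rows."""
    return np.column_stack([X, np.ones(len(X))]) @ coef
```

Validation on a held-out set of interference situations, as the abstract describes, is what distinguishes a usable predictor from one that merely memorizes the training ratings.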
Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity control, and pressure matching applied to a circular loudspeaker array. The highest mean overall quality was a compromise between distraction and target quality. The results showed that the term “distraction” produced good agreement among listeners, and that listener ratings made using this term were a good measure of the perceived effect of the interferer.
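Acoustic contrast control, one of the reproduction methods compared in this study, chooses loudspeaker weights that maximize the ratio of bright-zone to dark-zone sound energy, which reduces to a generalized eigenvalue problem at each frequency. A minimal single-frequency sketch, assuming measured transfer matrices and an illustrative regularization term:

```python
import numpy as np

def acoustic_contrast_filter(G_bright, G_dark, reg=1e-6):
    """Loudspeaker weights maximizing bright/dark zone energy ratio.

    G_bright, G_dark : complex transfer matrices (mics x loudspeakers)
    at one frequency. The optimum is the dominant eigenvector of
    (G_d^H G_d + reg*I)^{-1} (G_b^H G_b).
    """
    Rb = G_bright.conj().T @ G_bright                    # bright-zone correlation
    Rd = G_dark.conj().T @ G_dark + reg * np.eye(G_dark.shape[1])
    vals, vecs = np.linalg.eig(np.linalg.solve(Rd, Rb))  # generalized eig problem
    w = vecs[:, np.argmax(vals.real)]                    # dominant eigenvector
    return w / np.linalg.norm(w)
```

Maximizing contrast alone says nothing about the phase response in the bright zone, which is one reason the study also evaluates planarity control and pressure matching against perceived target quality.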
Adaptive Amplitude and Delay Control for Stereophonic Reproduction that Is Robust against Listener Position Variations
With stereo reproduction, sound images are correctly localized only when the listener is within a small area, called the sweet spot. When the listener moves laterally away from this ideal location, the observed interaural level, time, and phase differences no longer match the listener's expectations because of the unequal distances from the listener to the two loudspeakers. In order to provide better localization beyond the sweet spot, the proposed system detects the listener's position and adaptively controls the amplitude ratio and delay difference of the two loudspeakers. Results of subjective experiments using the proposed method demonstrate that sound images are localized more accurately than without such a system.
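The compensation idea can be sketched as follows: given the listener's position, delay the nearer loudspeaker so both signals arrive simultaneously, and attenuate it to match levels. This assumes simple point-source (1/r) level decay; the geometry and function names are illustrative, not the authors' implementation.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def compensation(spk_left, spk_right, listener, fs=48000):
    """Per-channel (gain, delay_in_samples) re-aligning an off-center listener.

    The nearer loudspeaker is delayed so both wavefronts arrive
    together, and attenuated (1/r model) so both arrive at equal level.
    """
    dL = np.linalg.norm(np.subtract(listener, spk_left))
    dR = np.linalg.norm(np.subtract(listener, spk_right))
    d_far = max(dL, dR)
    # delay each channel to match the farther loudspeaker's arrival time
    delay_L = round((d_far - dL) / C * fs)
    delay_R = round((d_far - dR) / C * fs)
    # 1/r spreading: attenuate the nearer channel to the farther one's level
    gain_L = dL / d_far
    gain_R = dR / d_far
    return (gain_L, delay_L), (gain_R, delay_R)
```

A centered listener yields unity gains and zero delays, i.e. the system leaves the ordinary sweet-spot case untouched and only intervenes for displaced positions.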
Measurements and Visualization of Sound Intensity Around the Human Head in Free Field Using Acoustic Vector Sensor
This research determined the vector sound intensity around a human-head simulator in a free field with different loudspeaker configurations. Data were collected using a Cartesian robot, velocity transducers based on hot-wire anemometry, and sound processing software. The resulting visualized sound field shows the influence of obstacles in the radiation path, the role of scattering reflections, and phase-amplitude relationships. Observation of the acoustic wave distribution shows that the phenomena occurring in the acoustic field are more complex than typically shown in acoustic field simulations. Wave phenomena such as diffraction and interference are clearly visible. Visualizing the vector sound intensity around the human head provides additional insight into sound reproduction in complex spaces.
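The quantity being mapped is the time-averaged active intensity vector, I_i = ⟨p(t)·v_i(t)⟩, combining the pressure and particle-velocity signals the probe delivers. A minimal sketch (array shapes and the function name are illustrative):

```python
import numpy as np

def active_intensity(p, v):
    """Time-averaged active sound intensity vector, I_i = <p(t) * v_i(t)>.

    p : pressure samples, shape (n,), in Pa
    v : particle-velocity samples, shape (n, 3), in m/s, e.g. from a
        three-axis hot-wire anemometric velocity probe
    Returns the intensity vector in W/m^2 per Cartesian axis.
    """
    return (p[:, None] * v).mean(axis=0)
```

For a plane wave, v = p/(ρc) along the propagation direction, so the intensity vector points along that direction with magnitude ⟨p²⟩/(ρc); deviations from that pattern are exactly what reveal scattering and diffraction around the head.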
Standards and Information Documents
AES Standards Committee News
Digital input-output interfacing; synchronization of digital audio; spatial acoustic data file format; loudspeaker polar radiation measurements
As in most other areas of life, the Internet is becoming the dominant way by which people access broadcast content. It is also rapidly taking over from older methods used for contributing content to broadcast studios, such as ISDN and microwave links. This requires careful management of the connections, as the network is no longer private to the broadcast users but is shared with massive quantities of other data.
Technical Committees are centers of technical expertise within the AES. Coordinated by the AES Technical Council, these committees track trends in audio in order to recommend to the Society papers, workshops, tutorials, master classes, standards, projects, publications, conferences, and awards in their fields. The Technical Council serves as the CTO of the Society. Currently there are 22 such groups of specialists within the council. Each consists of members from diverse backgrounds, countries, companies, and interests. The committees strive to foster wide-ranging points of view and approaches to technology. Please go to http://www.aes.org/technical/ to learn more about the activities of each committee and to inquire about membership. Membership is open to all AES members as well as those with a professional interest in each field.
59th Conference, Montreal, Call for Papers
139th Convention, New York, Call for Papers
60th Conference, Leuven, Call for Papers