The Journal of the Audio Engineering Society — the official publication of the AES — is the only peer-reviewed journal devoted exclusively to audio technology. Published 10 times each year, it is available to all AES members and subscribers.
The Journal contains state-of-the-art technical papers and engineering reports; feature articles covering timely topics; pre- and post-reports of AES conventions and other society activities; news from AES sections around the world; Standards and Education Committee work; and membership news, patents, new products, and newsworthy developments in the field of audio.
Affiliation: University of Applied Sciences, Wolfenbüttel, Germany
In this study a receiver-based packet loss recovery algorithm for streaming audio is proposed, based on the frequency tracking (FTR) algorithm. Enhancements are proposed that exploit the tradeoff between recovery quality and computational complexity at higher packet loss rates. The resulting enhanced frequency tracking (EFTR) algorithm uses information from the next received packet to improve the quality of the reconstructed audio signal. The conclusions are verified by experiments evaluating both the quality of the proposed enhancements and the processing time. Objective test results measured with the PEAQ (Perceptual Evaluation of Audio Quality) software show that the proposed EFTR algorithm delivers better quality than the original FTR algorithm. The improvement at a high packet loss rate of 10% is especially significant for single instruments (e.g., organ) and a soprano-orchestra combination: the ODG (Objective Difference Grade) improves from -1.84 to -0.4, i.e., from slightly annoying to barely noticeable. The processing time is also reduced with EFTR; for example, at a 10% loss rate the organ title is processed 85% faster, in 8.2 s instead of 55.8 s.
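The core idea behind this family of concealment methods, tracking dominant partials in the packets around a gap and extrapolating them across it, can be sketched as follows. This is a minimal illustration under assumed names and parameters; the published FTR/EFTR algorithms differ in detail, but the use of the next packet for a backward estimate mirrors the EFTR enhancement described above.

```python
import numpy as np

def conceal_lost_packet(prev, nxt, n_partials=4, fs=44100):
    """Hypothetical sketch of sinusoid-based loss concealment:
    estimate the dominant partials in the previous packet,
    extrapolate them forward, and (EFTR-style) blend with a
    backward extrapolation from the next packet."""
    N = len(prev)

    def partials(frame):
        win = np.hanning(N)
        spec = np.fft.rfft(frame * win)
        mags = np.abs(spec)
        bins = np.argsort(mags)[-n_partials:]   # strongest bins
        freqs = bins * fs / N                   # bin index -> Hz
        phases = np.angle(spec[bins])           # phase at frame start
        amps = 2 * mags[bins] / win.sum()       # undo window gain
        return freqs, amps, phases

    t = np.arange(N) / fs

    # forward extrapolation: previous packet ends N/fs before the gap
    f, a, p = partials(prev)
    fwd = sum(ai * np.cos(2 * np.pi * fi * (t + N / fs) + pi)
              for fi, ai, pi in zip(f, a, p))

    # backward extrapolation: next packet starts N/fs after the gap
    f, a, p = partials(nxt)
    bwd = sum(ai * np.cos(2 * np.pi * fi * (t - N / fs) + pi)
              for fi, ai, pi in zip(f, a, p))

    # linear crossfade between the two estimates
    w = np.linspace(1.0, 0.0, N)
    return w * fwd + (1 - w) * bwd
```

For a stationary tone the forward and backward estimates agree and the lost packet is recovered almost exactly; the crossfade matters when the signal changes across the gap.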
Download: PDF (HIGH Res) (3.8MB)
Download: PDF (LOW Res) (494KB)
Authors: Straube, Florian; Schultz, Frank; Makarski, Michael; Weinzierl, Stefan
Affiliation: Audio Communication Group, TU Berlin, Berlin, Germany; Institute for Acoustics and Audio Technique, Würselen, Germany
Line source arrays (LSAs) are used for large-scale sound reinforcement, aiming to synthesize homogeneous sound fields over the full audio bandwidth. The deployed loudspeaker cabinets are rigged with different tilt angles and are electronically controlled to provide the intended coverage of the audience zones and to avoid radiation toward the ceiling, reflective walls, or residential areas. In this article a mixed analytical-numerical approach, referred to as line source array venue slice drive optimization (LAVDO), is introduced for optimizing the individual loudspeakers' driving functions. This method is compared to numerical optimization schemes, including least-squares and multi-objective goal attainment approaches. For two standard LSAs in straight and in curved configurations, these temporal frequency domain optimizations are performed for a typical concert venue. It is shown that LAVDO overcomes the nonsmooth frequency responses resulting from numerical frequency domain approaches. LAVDO provides smooth amplitude and phase responses of the loudspeakers' driving functions, which are essential for practical finite impulse response filter design and implementation.
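The least-squares baseline mentioned in the comparison can be illustrated with a small numerical sketch: given a matrix of transfer functions from the loudspeakers to a set of control points, solve a regularized normal-equation system for the complex driving functions. The free-field point-source model and all names below are assumptions for illustration; LAVDO itself is not reproduced here.

```python
import numpy as np

def ls_driving_functions(G, p_target, lam=1e-2):
    """Regularized least-squares solve for complex driving
    functions d so that G @ d approximates the target sound
    pressure p_target at the control points.
    G: (M control points) x (N loudspeakers) transfer matrix."""
    M, N = G.shape
    A = G.conj().T @ G + lam * np.eye(N)   # Tikhonov regularization
    b = G.conj().T @ p_target
    return np.linalg.solve(A, b)

def greens_matrix(src, rcv, k):
    """Free-field monopole transfer functions (assumed model):
    exp(-jkr) / (4*pi*r) from each source to each receiver."""
    r = np.linalg.norm(rcv[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)
```

Solving such a system independently per frequency bin is exactly what produces the nonsmooth responses the article criticizes: nothing couples neighboring bins, so the solved driving functions can jump arbitrarily between them.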
Download: PDF (HIGH Res) (1.2MB)
Download: PDF (LOW Res) (586KB)
Authors: Yang, Jiajun; Hermann, Thomas
Affiliation: Ambient Intelligence Group, CITEC, Bielefeld University, Bielefeld, Germany
Exploratory Data Analysis (EDA) refers to the process of detecting patterns in data when explicit knowledge of such patterns is missing. Because EDA predominantly employs data visualization, high-dimensional data remains challenging to explore. To mitigate this challenge, some information can be shifted into the auditory channel, exploiting humans' highly developed listening skills. This paper introduces Mode Explorer, a new sonification model that enables continuous interactive exploration of datasets with regard to their clustering: interactions with 2D scatter plots of high-dimensional data are augmented with information about the probability density function, extending the data display into the auditory domain. The method was shown to be effective in supporting users in more accurately assessing cluster mass and the number of clusters. While the Mode Explorer sonification targets cluster analysis, the ongoing research aims to establish a more general toolbox of sonification models tailored to uncover different structural aspects of high-dimensional data.
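The underlying idea of driving a sonification from the local probability density can be sketched as below: estimate the density at the cursor position with a kernel density estimate and map it to a synthesis parameter such as pitch. The mapping and all parameter names are hypothetical illustrations, not the paper's actual Mode Explorer model.

```python
import numpy as np

def kde(points, x, bw=0.5):
    """Unnormalized Gaussian kernel density estimate at query
    position x (only relative densities matter for the mapping)."""
    d = np.linalg.norm(points - x, axis=1)
    return np.mean(np.exp(-0.5 * (d / bw) ** 2))

def density_to_pitch(points, x, f_lo=200.0, f_hi=2000.0, bw=0.5):
    """Hypothetical sonification mapping: local data density,
    normalized by the densest data point, controls frequency on
    a logarithmic scale from f_lo to f_hi."""
    p = kde(points, x, bw)
    p_max = max(kde(points, pt, bw) for pt in points)
    return f_lo * (f_hi / f_lo) ** (p / p_max)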
Download: PDF (HIGH Res) (3.0MB)
Download: PDF (LOW Res) (317KB)
Authors: Ko, Doyuen; Woszczyk, Wieslaw
Affiliation: Belmont University, Nashville, TN, USA; Centre for Interdisciplinary Research in Music, Media and Technology, McGill University, Montreal, Quebec, Canada
Musicians’ ability to perceive and present their unique sound is greatly influenced by the acoustical properties of a given performance space. Musical properties such as dynamics, timbre, intonation, and tempo depend closely on spatial acoustics. As a consequence, the musical score and orchestration are often affected by the acoustic environment in which the composer intends to have the music performed. An experimental virtual acoustic system located in a large scoring stage at McGill University was used to examine the effect of spatial acoustics on musicians’ assessments of the spatial quality of the sound field enveloping them during performance. Three acoustic conditions presented distinct room acoustic characteristics, including reverberation time, clarity, stage support, early lateral energy, and interaural cross-correlation coefficient. Eleven professional string quartets (44 musicians) were invited to render subjective evaluations by responding to surveys after performing under each condition. Results showed a strong preference for virtual acoustics over the natural acoustics of the space. Factor analysis revealed three primary underlying perceptual dimensions: stage support, spatial impression, and tonal balance. “Quality of reverberation (naturalness),” “amount of reverberation,” “hearing other musicians,” and “height sensation” were salient attributes highly correlated with the musicians’ preferences.
Download: PDF (HIGH Res) (5.1MB)
Download: PDF (LOW Res) (306KB)
Authors: Foss, Richard; Devonpor, Sean
Affiliation: Rhodes University, Grahamstown, South Africa
Immersive sound is commonly used to localize sound sources above, below, and around listeners. To achieve this immersive goal, sound systems employ an ever-increasing number of speakers. Because immersive sound systems with large speaker configurations are deployed in cinemas, theaters, museums, and home theater installations, there is a need to provide control suited to these various contexts and a means of automating that control. An immersive sound system has been created that allows real-time control over sound source localization. It is a multi-user client/server system whose clients are mobile devices, thereby allowing remote control over sound source localization. The touch and orientation capabilities of the mobile devices are used to generate three-dimensional coordinates. The server receives localization control messages from the clients and uses an Ethernet AVB (Audio Video Bridging) network to distribute the appropriate mix levels to speakers with built-in signal processing. The localization messages can be recorded by users for later playback.
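One simple way a server could turn a three-dimensional source coordinate into per-speaker mix levels is distance-based amplitude panning: each speaker's gain falls off with its distance to the virtual source, and the gains are then power-normalized. The sketch below is illustrative only, with assumed names and a chosen rolloff; the paper's server computes its own mix levels.

```python
import numpy as np

def mix_levels(source_xyz, speaker_xyz, rolloff=2.0):
    """Distance-based amplitude panning sketch (hypothetical).
    source_xyz: (3,) virtual source position from the client.
    speaker_xyz: (K, 3) speaker positions.
    Returns K gains with constant total power."""
    d = np.linalg.norm(speaker_xyz - source_xyz, axis=1)
    g = 1.0 / np.maximum(d, 1e-3) ** rolloff   # clamp to avoid div-by-zero
    return g / np.linalg.norm(g)               # sum of squared gains == 1
```

Keeping the total power constant means the perceived loudness stays roughly steady as a user drags the source around, while the gain distribution shifts toward the nearest speakers.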
Download: PDF (HIGH Res) (2.8MB)
Download: PDF (LOW Res) (265KB)
The complexities of music compared with speech or noise give rise to a number of challenges when evaluating hearing loss or hearing-protection strategies. Some aspects of hyperacusis have a psychological dimension that depends partly on the nature of the sound itself, and the effects of recreational sound exposure may differ from those of noise in the workplace. Various types of hidden hearing loss may lie behind an otherwise healthy audiogram, requiring measurements of otoacoustic emissions or other tests to uncover them. People with hearing aids do not necessarily get a good listening experience with music, as most of these devices are still optimized mainly for speech.
Download: PDF (223KB)