The Journal of the Audio Engineering Society — the official publication of the AES — is the only peer-reviewed journal devoted exclusively to audio technology. Published 10 times each year, it is available to all AES members and subscribers.
The Journal contains state-of-the-art technical papers and engineering reports; feature articles covering timely topics; pre- and post-reports of AES conventions and other society activities; news from AES sections around the world; Standards and Education Committee work; and membership news, patents, new products, and newsworthy developments in the field of audio.
Authors: Favrot, Alexis; Faller, Christof
Affiliation: Illusonic GmbH, Uster, Switzerland
Audio corresponding to the moving picture of a virtual reality (VR) camera can be recorded using an A- or B-format microphone. The recorded signal is decoded with respect to the look direction to generate binaural or multichannel audio that follows the visual scene. Postproduction possibilities on given B-format signals are restricted to linear matrixing and filtering, limiting the use of recorded B-format content. A time-frequency adaptive method is presented that can apply equalization to different sources (directions) in the B-format without needing to decode it. The proposed approach estimates the direct sound power in each direction to be equalized and applies a single Wiener filter per B-format channel, equalizing all desired directions individually and at once.
Download: PDF (HIGH Res) (1.3MB)
Download: PDF (LOW Res) (707KB)
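The single-gain-per-bin idea in the abstract can be illustrated numerically: given an estimate of the direct-sound power arriving from the direction to be equalized and the power of everything else, one real Wiener gain per time-frequency bin applies the target EQ to the direct sound while leaving the residual untouched. The sketch below is a minimal illustration, not the paper's algorithm; the power estimates and EQ target are invented.

```python
import numpy as np

def directional_eq_gains(p_dir, p_rest, eq_gain):
    """Per-bin Wiener gain applying the power gain eq_gain to the
    estimated direct-sound power p_dir while leaving the residual
    power p_rest untouched (illustrative formula only)."""
    return (eq_gain * p_dir + p_rest) / (p_dir + p_rest + 1e-12)

# Toy spectra: direct sound dominates the low bins, diffuse sound the high bins
p_dir = np.array([1.0, 0.5, 0.1, 0.0])
p_rest = np.array([0.1, 0.5, 1.0, 1.0])
g = directional_eq_gains(p_dir, p_rest, eq_gain=4.0)  # +6 dB on direct sound

# The same scalar gain per bin is applied to every B-format channel
# (W, X, Y, Z alike), so the result is still a valid B-format stream:
# only linear, time-frequency-dependent filtering is performed.
W = np.ones(4); X = np.ones(4)
W_eq, X_eq = g * W, g * X
```

Where no direct sound is detected the gain falls back to unity, so bins dominated by diffuse sound pass through unchanged.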
Authors: Schepker, Henning; Denk, Florian; Kollmeier, Birger; Doclo, Simon
Affiliation: Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
In hearing devices, hear-through features that aim to provide the user with acoustic awareness of their surroundings are becoming increasingly popular. In particular, such awareness can be achieved when the open-ear properties are perceptually restored with the device inserted, typically called acoustic transparency. In this study, we investigate the perceptual sound quality of six commercial consumer hearing devices and two research hearing devices with hear-through features. We conducted two experiments in which normal-hearing participants rated the perceptual sound quality of different audio signals processed by the hearing devices. In Experiment 1, the participants were not provided with an explicit open-ear reference, while in Experiment 2 the open-ear reference was explicitly provided. Results show that most commercial consumer hearing devices are not able to achieve a perceptual sound quality comparable to the open ear. Furthermore, results indicate that a main factor contributing to the overall quality of a hear-through feature is the similarity of the transfer function with the device inserted to the open-ear transfer function.
Download: PDF (HIGH Res) (5.0MB)
Download: PDF (LOW Res) (1.3MB)
Authors: Denk, Florian; Schepker, Henning; Doclo, Simon; Kollmeier, Birger
Affiliation: Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
An increasing number of earphones and other hearing devices contain functionalities based on a so-called hear-through feature, which allows the user to hear the acoustic environment through the device. Ideally, the user would perceive the hear-through sound as identical to listening with the open ear, which is referred to as acoustic transparency. In technical terms, this means that the sound transmission to the eardrum should be as similar as possible between the open ear and through the device. In this study, we evaluate the acoustic transparency of the hear-through feature of seven commercial hearables as well as two research hearing devices by means of technical measurements on a dummy head. A variety of artefacts, including frequency response deviations, comb filtering artefacts, and destruction of spatial cues, were revealed and quantified, and surprisingly large differences between current devices are noted. The corresponding subjective sound quality has been assessed in a companion study.
Download: PDF (HIGH Res) (3.8MB)
Download: PDF (LOW Res) (2.6MB)
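One of the artefacts the abstract names, comb filtering, arises when sound leaking directly through the ear tip sums with the processed hear-through path delayed by the device latency. The sketch below computes the resulting magnitude response for an assumed 1 ms latency and -6 dB broadband leakage; both values are illustrative and not measured from any of the devices in the study.

```python
import numpy as np

# Hear-through comb filtering: attenuated direct leakage through the ear
# tip sums with the unit-gain processed path delayed by the device latency.
latency_s = 1.0e-3   # assumed 1 ms processing latency
leak_gain = 0.5      # assumed broadband leakage (-6 dB)

# With a 1 ms delay, notches fall at odd multiples of 500 Hz and peaks at
# even multiples; evaluate the summed response at a notch, a peak, a notch.
f = np.array([500.0, 1000.0, 1500.0])
H = np.abs(1.0 + leak_gain * np.exp(-2j * np.pi * f * latency_s))

ripple_db = 20 * np.log10(H.max() / H.min())  # peak-to-notch depth of the comb
```

Even this modest leakage produces roughly 9.5 dB of spectral ripple, which is why latency and acoustic sealing both matter for transparency.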
Authors: Salmon, François; Hendrickx, Étienne; Épain, Nicolas; Paquier, Mathieu
Affiliation: Institute of Research and Technology, 1219 Avenue des Champs Blancs, 35510 Cesson-Sévigné, France; University of Brest, CNRS, Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
Few studies have investigated the influence of visual cues on sound space perception beyond their influence on perceived sound source position. Previous studies suggest that the perception of late reflections is not affected by the visual impression of a room; however, only a limited number of spatial sound attributes were investigated. In the present paper, audiovisual interactions were examined without making assumptions about the number and nature of the perceptual dimensions involved in the perception of sound space. In a virtual environment employing a head-mounted display and dynamic binaural playback, subjects were asked to judge the perceived dissimilarity between sound spaces while watching the same visual stimulus. Pairwise comparisons were repeated using multiple visual conditions, including an audio-only condition. One sound source, a male voice reciting a poem, was considered in the listening test. It appeared that the visual modality did not impact the perceived differences between sound spaces.
Download: PDF (HIGH Res) (3.4MB)
Download: PDF (LOW Res) (491KB)
Authors: Moon, Soyoun; Park, Sunghwan; Park, Donggun; Yun, Myunghwan; Chang, Kyongjin; Park, Dongchul
Affiliation: Department of Industrial Engineering & Institute for Industrial System Innovation, Seoul National University, Seoul, Republic of Korea; Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, Republic of Korea; Sound Design Research Lab, Hyundai Motors R&D Center, Hwaseong, Republic of Korea
Most previous studies on active sound design (ASD) development proposed regression models based on psychoacoustic parameters for engine sound design. However, order-based parameters are required for real ASD development, considering that an ASD system is controlled by order levels. In this paper, we propose a regression model utilizing order-level-based parameters that can be efficiently applied to ASD development. A jury test was conducted for 27 engine sound recordings with 36 participants with normal hearing ability to evaluate the level of affective adjectives. Then, acoustic parameters were measured from the engine sound recordings to identify the relationship between the adjectives and the parameters. Finally, a regression model was derived through statistical analysis. The properties of the model were compared with those of models proposed in previous studies to verify its superiority. The proposed regression model can reduce the time and effort required for ASD development.
Download: PDF (HIGH Res) (2.8MB)
Download: PDF (LOW Res) (641KB)
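The core of such an approach is an ordinary least-squares fit from engine-order levels to an affective rating. The sketch below shows the mechanics on synthetic data; the specific orders, weights, and the "sportiness" rating are invented stand-ins, not the parameters or jury data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sounds = 27  # same count as the abstract's 27 recordings, data synthetic

# Columns: levels (dB) of, e.g., the 2nd, 4th, and 6th engine orders
X = rng.uniform(40, 80, size=(n_sounds, 3))
true_w = np.array([0.05, -0.02, 0.08])   # invented order weights
y = X @ true_w + 1.5                     # noiseless synthetic rating

# Least-squares fit with an intercept column appended
A = np.column_stack([X, np.ones(n_sounds)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
```

Because the synthetic ratings are noiseless, the fit recovers the generating weights exactly; with real jury data the residuals would quantify how well order levels alone explain the adjective scores.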
Affiliation: KLIPPEL GmbH, Mendelssohnallee 30, 01277 Dresden, Germany
The value assigned by the end user to a loudspeaker, headphone, or any other audio device determines their purchase decision and the success of the product in the market. The paper investigates the relationship between end-user value, performance characteristics, cost structure, and the particular design. A model based on a modified benefit-cost ratio is presented that describes the impact (sensitivity) of the performance characteristics on the end-user value. The performance sensitivity is a central and powerful term in audio engineering because it links physical, perceptual, and economic quantities. This new concept is applied to all phases of the product life and addresses open questions such as defining the optimum target performance, selecting design choices, increasing the yield rate in production, and ensuring reliability and quality in the final application.
Download: PDF (HIGH Res) (1.5MB)
Download: PDF (LOW Res) (601KB)
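The notion of performance sensitivity can be sketched as the derivative of a benefit-cost ratio with respect to one performance characteristic. The functions below are invented stand-ins chosen only to show the mechanics (a saturating perceptual benefit divided by cost); they do not reproduce the paper's model.

```python
# Finite-difference sensitivity of a benefit-cost ratio with respect to a
# performance characteristic.  Both functions are illustrative assumptions.
def value(perf_db, cost_eur):
    # Saturating benefit: extra dB of performance yields diminishing returns
    benefit = 100.0 / (1.0 + pow(10.0, -perf_db / 20.0))
    return benefit / cost_eur

def sensitivity(perf_db, cost_eur, h=1e-4):
    # Central finite difference: d(value)/d(performance)
    return (value(perf_db + h, cost_eur) - value(perf_db - h, cost_eur)) / (2 * h)
```

Under this toy model the sensitivity is positive but shrinks as performance rises, capturing the idea that past a certain point, extra performance no longer buys proportional end-user value.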
Author: Anderson, Ian Z.
Affiliation: Kent State University at Stark, North Canton, Ohio
This paper provides a number of comparative, quantitative evaluations of 10 different makes and models of electrolytic capacitors. Models range from expensive parts specified for use in audio circuits to low-cost general-purpose parts. The datasets comprise out-of-circuit electronic measurements, total harmonic distortion (THD) fast Fourier transform (FFT) sweeps, and cumulative distortion products resulting from a 31-tone stimulus, performed on the components in a circuit designed to emulate a typical line-level audio recording and mixing console. Results are examined in an effort to identify any measurable properties that may distinguish "audio capacitors" as outliers from their general-purpose counterparts.
Download: PDF (HIGH Res) (3.9MB)
Download: PDF (LOW Res) (776KB)
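The THD-from-FFT measurement underlying such sweeps is easy to illustrate: drive a device with a pure tone, take the spectrum of the output, and ratio the harmonic energy to the fundamental. The mild cubic nonlinearity below is an assumed stand-in for a real component under test, not data from the paper.

```python
import numpy as np

fs, f0, n = 48_000, 1_000, 48_000          # 1 s of a 1 kHz test tone
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)
y = x + 0.01 * x**3                        # assumed weak cubic distortion

# With n = fs the FFT bin spacing is exactly 1 Hz, so harmonics land on bins
spec = np.abs(np.fft.rfft(y)) / (n / 2)    # normalize to sine amplitudes
fund = spec[f0]
harmonics = spec[[2 * f0, 3 * f0, 4 * f0, 5 * f0]]

thd = np.sqrt(np.sum(harmonics**2)) / fund
thd_percent = 100 * thd
```

For this nonlinearity the trigonometric identity sin^3 = (3 sin t - sin 3t)/4 predicts a single third-harmonic component of 0.0025 against a fundamental of 1.0075, i.e. about 0.25% THD, which the FFT estimate reproduces.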
During the extremely successful AES Virtual Vienna convention, held online in June in place of the planned in-person event in Vienna, 360° audio and VR production were key themes. A panel discussion on binaural audio, chaired by Tom Ammerman, and a tutorial on "Sound for Extreme 360° Productions," given by Martin Rieger, are summarized.
Download: PDF (204KB)