AES Journal

Journal of the Audio Engineering Society

The Journal of the Audio Engineering Society — the official publication of the AES — is the only peer-reviewed journal devoted exclusively to audio technology. Published 10 times each year, it is available to all AES members and subscribers.

The Journal contains state-of-the-art review papers, technical papers, and engineering reports; standards committee work, convention and conference announcements, membership news, and book reviews.

Current Issue: 2024 May - Volume 72 Number 5


Guest Editors' Note -- Special Issue on Sonification

Authors: Ziemer, Tim; Hermann, Thomas; McMullen, Kyla; Höldrich, Robert

Page: 272

Download: PDF (41KB)

Review Papers

Sound Terminology in Sonification

Open Access



Sonification research is intrinsically interdisciplinary. Consequently, a proper documentation of and interdisciplinary discourse about a sonification is often hindered by terminology discrepancies between involved disciplines, i.e., the lack of a common sound terminology in sonification research. Without a common ground, a researcher from one discipline may have trouble understanding the implementation and imagining the resulting sound perception of a sonification, if the sonification is described by a researcher from another discipline. To find a common ground, the author consulted literature on interdisciplinary research and discourse, identified problems that occur in sonification, and applied the recommended solutions. As a result, the author recommends considering three aspects of sonification individually, namely 1) Sound Design Concept, 2) Objective, and 3) Evaluation, clarifying which discipline is involved in which aspect and sticking to this discipline's terminology. As two requirements of sonifications are that they are a) reproducible and b) interpretable, the author recommends documenting and discussing every sonification design once using audio engineering terminology and once using psychoacoustic terminology. The appendixes provide comprehensive lists of sound terms from both disciplines, together with relevant literature and a clarification of often misunderstood and misused terms.

  Download: PDF (HIGH Res) (291KB)

  Download: PDF (LOW Res) (206KB)


Papers

A Natural Sonification Mapping for Handwriting

Open Access



The sonification of handwriting has been shown effective in various learning tasks. In this paper, the authors investigate the sound design used for handwriting interaction based on a simple and cost-efficient prototype. The authentic interaction sound is compared with physically informed sonification designs that employ either natural or inverted mapping. In an experiment, participants copied text and drawings. The authors found simple measures of the structure-borne audio signal that showed how participants were affected in their movements, but only when drawing. In contrast, participants rated the sound features differently only for writing. The authentic interaction sound generally scored best, followed by a natural sonification mapping.

  Download: PDF (HIGH Res) (2.7MB)

  Download: PDF (LOW Res) (604KB)


Multichannel Dynamic Sound Rendering and Echo Suppression in a Room Using Wave Field Synthesis


With multichannel sound rendering systems, immersive sound can be experienced by a broader audience. Rendering of a stationary source is widely studied in the literature. However, a dynamic sound, in which the source's position is continuously changing, provides improved source localization and a more immersive listener experience. This paper investigates the multichannel rendering of a dynamic sound source using a wave field synthesis method in a reverberant shoe-box room model. A finite-difference time-domain method is employed to visualize the reproduced sound field in the time domain. To address the degradation in sound quality caused by room echoes, a method is presented in which the reflected sound is compensated by rendering the primary source's images from the auxiliary actuators, enhancing SNR by approximately 2--3.5 dB.
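The "primary source's images" in this abstract refer to the classical image-source model of a shoebox room, in which each wall reflection is modeled as a mirror image of the source across that wall. The following sketch (not the authors' implementation; room dimensions and source position are hypothetical) computes the six first-order image positions:

```python
# First-order image sources of a point source in a shoebox room
# (image-source model; room size and source position are assumed values).

def first_order_images(room, src):
    """Return the six first-order mirror-image positions of `src`
    in a shoebox room spanning [0, Lx] x [0, Ly] x [0, Lz]."""
    images = []
    for axis, length in enumerate(room):
        for wall in (0.0, length):              # the two walls normal to this axis
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # mirror across the wall plane
            images.append(tuple(img))
    return images

room = (5.0, 4.0, 3.0)   # room dimensions in metres (assumed)
src = (1.0, 2.0, 1.5)    # primary source position (assumed)

for img in first_order_images(room, src):
    print(img)
```

Rendering compensating signals from these image positions is the basic idea behind echo suppression of the kind the paper describes; higher-order reflections repeat the mirroring recursively.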

  Download: PDF (HIGH Res) (8.2MB)

  Download: PDF (LOW Res) (1.0MB)


Speech, Nonspeech Audio, and Visual Interruptions of a Tracking Task: A Replication and Extension of Nees and Sampsell (2021)


Interruptions from technology---such as alerts from mobile communication devices---are a pervasive aspect of modern life. Interruptions can be detrimental to performance of the ongoing, interrupted task. Designers often can choose whether interruptions are delivered as visual or auditory alerts. Contradictory theories have emerged regarding whether auditory or visual alerts are more harmful to performance of ongoing visual tasks. Multiple Resources Theory predicts better overall performance with auditory alerts, but Auditory Preemption Theory predicts better overall performance with visual alerts. Nees and Sampsell previously found that multitasking was superior with nonspeech auditory alerts as compared to visual alerts. In the current experiment, their methods were replicated and extended to include a speech auditory alerts condition. Performance of the ongoing tracking task was worse with interruption from visual alerts, and perceived workload was also highest in this condition. Reaction time to alerts was fastest with visual alerts. There was also converging evidence to suggest that performance with speech alerts was superior to performance with nonspeech tonal alerts. The current experiment replicated the results of Nees and Sampsell and extended their findings to speech alert sounds. As in their study, the pattern of findings here supports Multiple Resources Theory over Auditory Preemption Theory.

  Download: PDF (HIGH Res) (1.8MB)

  Download: PDF (LOW Res) (387KB)


Spatial Audio and Parameter Mapping Experiments for the Auditory Display of Coral Bleaching Data


In this article, data relating to Hawaii's 2019 coral bleaching are auditorily displayed using parameter mapping sonification and ambisonics. Although primarily an explorative endeavor, this undertaking is conceptually rooted in ecological sound art and neutrally positioned on an established Cartesian framework known as Aesthetic Perspective Space. Through iterative design, different versions are implemented using sound surrogates of coral reefs' natural soundscapes derived from either (a) real undersea recordings of reef environments or (b) modeling by means of sound synthesis. These audio surrogates correspond to data clusters aggregated by geographical location and, after being represented as sound sources on an ambisonic sphere, all contribute to a final sonic environment in which each cluster is progressively altered as the corresponding coral location undergoes bleaching. To assess both the perceived aesthetic placement of these experiments and their potential for public engagement in the discourse around climate change, evaluation studies are carried out. Results align with the aesthetic goals of the author, while consecutive versions manage to improve on critical fronts relating to climate awareness, providing further motivation toward more immersive implementations in future editions.

  Download: PDF (HIGH Res) (5.7MB)

  Download: PDF (LOW Res) (552KB)


Connecting Sound to Data: Sonification Workshop Methods With Expert and Non-Expert Participants

Open Access



Sonification and sonic interaction design aim to create meaningful displays and digital interactions using data and information from the most disparate fields (astronomy, finance, health, and security, for example) as the basis of the design. To date, there are no standards and conventions on how to meaningfully link data to sound; therefore, designers develop these connections on a case-by-case basis. Participatory workshops that target end users and domain experts are a way for sound designers to find meaningful connections between data and sounds at the start of the design process, so that final outcomes are more likely to be effective and accepted by users. In this paper, the authors present and discuss the participatory workshop methods they have developed within the Sound for Energy project. In particular, they highlight the aspects that are easily transferable to other target domains. With this, the authors contribute to the effort of making sonification and sonic interaction design a more viable and accepted alternative to traditional, usually visual, displays.

  Download: PDF (HIGH Res) (3.4MB)

  Download: PDF (LOW Res) (749KB)


Toward an Auditory Virtual Observatory


The large ecosystem of observations generated by major space telescope missions can be remotely analyzed using interoperable virtual observatory technologies. In this context of astronomical big data analysis, sonification has the potential of adding a complementary dimension to visualization, enhancing the accessibility of the archives, and offering an alternative strategy to be used when overlapping issues are found in purely graphical representations. This article presents a collection of sonification and musification prototypes that explore the case studies of the MILES and STELIB stellar libraries from the Spanish Virtual Observatory and the Kepler and TESS light curve databases from the Space Telescope Science Institute archive. Using automation and deep learning algorithms, it offers a "palette" of resources that could be used in future developments oriented toward an auditory virtual observatory proposal. The work includes a user study with quantitative and qualitative feedback from specialized and nonspecialized users. The study analyzes the use of sine waves and musical-instrument mappings for revealing overlapped lines in galaxy transmission spectra, confirms the need for training and prior knowledge for the correct interpretation of accurate sonifications, and provides potential guidelines to inspire future designs of widely accepted auditory representations for outreach purposes.
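The sine-wave approach mentioned in the abstract can be illustrated with a minimal sketch (not the authors' design): spectral-line wavelengths are rescaled to audible frequencies, and one sine partial is synthesized per line, so overlapping lines become separately audible tones. The visible-band limits, frequency range, and linear mapping here are all illustrative assumptions.

```python
import numpy as np

def spectrum_to_sines(wavelengths_nm, duration=2.0, sr=44100,
                      f_lo=220.0, f_hi=880.0):
    """Map spectral-line wavelengths onto audible sine frequencies
    (linear mapping; all ranges are illustrative assumptions) and mix them."""
    w = np.asarray(wavelengths_nm, dtype=float)
    # Normalize wavelengths to 0..1 over an assumed 380-780 nm band
    norm = (w - 380.0) / (780.0 - 380.0)
    freqs = f_lo + norm * (f_hi - f_lo)
    t = np.arange(int(duration * sr)) / sr
    mix = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return mix / max(len(freqs), 1)   # scale down to avoid clipping

# Two hydrogen Balmer lines as example input wavelengths
audio = spectrum_to_sines([486.1, 656.3])
```

Two lines that overlap visually in a plotted spectrum still map to two distinct frequencies, which is the perceptual advantage the study evaluates.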

  Download: PDF (HIGH Res) (2.8MB)

  Download: PDF (LOW Res) (526KB)


Letting Pulsars Sing: Sonification With Granular Synthesis

Open Access



An astronomy sonification project has been initiated to create sound and music from the data of pulsars in space. Pulsars form when certain stars exhaust their fuel and collapse; they emit beams of electromagnetic radiation that sweep past Earth periodically as the pulsar rotates. Each pulsar has unique characteristics. The source of the data is the online Pulsar Catalog from the Australian National Telescope Facility. The first result is a stereo fixed media composition, From Orion to Cassiopeia, which reveals a sweep of much of the Milky Way, displaying audio for many of the known pulsars. Galactic longitude, rotation speed, pulse width, mean flux density, age, and distance are mapped to granular synthesis parameters. Sound event duration, amplitude, amount of reverberation, grain rate, grain duration, grain frequency, and panning are controlled by the data. The piece was created with the new SGRAN2() instrument in the RTcmix music programming language.
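The parameter mapping the abstract describes can be sketched generically: each catalog field is rescaled from its physical range into a synthesis-parameter range. The sketch below is not the piece's actual RTcmix mapping; the field names, input ranges, and output ranges are hypothetical placeholders.

```python
def lin_map(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi],
    clamping x to the input range first."""
    x = min(max(x, in_lo), in_hi)
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def pulsar_to_grains(pulsar):
    """Map catalog fields of one pulsar to granular-synthesis parameters.
    All field names and ranges are illustrative assumptions, not the
    composition's actual mapping."""
    return {
        "pan":        lin_map(pulsar["gal_longitude"], 0.0, 360.0, -1.0, 1.0),
        "grain_rate": lin_map(pulsar["rotation_hz"], 0.1, 700.0, 2.0, 100.0),
        "grain_dur":  lin_map(pulsar["pulse_width_ms"], 1.0, 500.0, 0.01, 0.25),
        "amplitude":  lin_map(pulsar["flux_mjy"], 0.1, 1000.0, 0.05, 1.0),
    }

params = pulsar_to_grains({"gal_longitude": 180.0, "rotation_hz": 30.0,
                           "pulse_width_ms": 50.0, "flux_mjy": 100.0})
```

Each pulsar in the catalog then becomes one sound event whose grain stream is parameterized by its physical data, which is the essence of parameter mapping sonification.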

  Download: PDF (HIGH Res) (8.0MB)

  Download: PDF (LOW Res) (777KB)


Standards and Information Documents

AES Standards Committee News

Page: 360

Download: PDF (42KB)

Departments

Conv&Conf

Page: 364

Download: PDF (119KB)

Extras

Table of Contents

Download: PDF (47KB)

Cover & Sustaining Members List

Download: PDF (61KB)

AES Officers, Committees, Offices & Journal Staff

Download: PDF (127KB)
