Chair: Angela McArthur
One of the tough but rewarding challenges of interactive audio synthesis is the continuous representation of reflecting and occluding objects in the simulated world. For decades it has been normal for game engines to support two sorts of 3D geometry, for graphics and physics, but neither of those is well-suited for audio. This paper explains how the geometric needs of audio differ from those of graphics and physics, and describes techniques that hit games have used to fill the gaps. It presents easily-programmed methods to tailor object-based audio, physics data, pre-rendered 3D audio soundfields and reverb characteristics to account for occlusion and gaps in the reverberant environment, including those caused by movements in the simulated world or the collapse of nearby objects.
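As a toy illustration of the kind of easily-programmed occlusion handling the abstract describes, the sketch below attenuates a source's gain for each wall segment that crosses the direct listener-to-source path. All names here (`Segment`, `occlusion_gain`, the per-wall loss factor) are hypothetical and not drawn from the paper itself:

```python
# Minimal 2-D occlusion sketch: one gain multiplier per intervening wall.
from dataclasses import dataclass

@dataclass
class Segment:
    """A 2-D wall segment from (x1, y1) to (x2, y2)."""
    x1: float
    y1: float
    x2: float
    y2: float

def _intersects(seg, ax, ay, bx, by):
    """True if segment seg crosses the segment from (ax, ay) to (bx, by)."""
    def ccw(px, py, qx, qy, rx, ry):
        # Counter-clockwise orientation test for points P, Q, R.
        return (ry - py) * (qx - px) > (qy - py) * (rx - px)
    return (ccw(ax, ay, seg.x1, seg.y1, seg.x2, seg.y2) !=
            ccw(bx, by, seg.x1, seg.y1, seg.x2, seg.y2) and
            ccw(ax, ay, bx, by, seg.x1, seg.y1) !=
            ccw(ax, ay, bx, by, seg.x2, seg.y2))

def occlusion_gain(listener, source, walls, loss_per_wall=0.5):
    """Multiply gain by loss_per_wall for each wall crossing the direct path."""
    gain = 1.0
    for w in walls:
        if _intersects(w, *listener, *source):
            gain *= loss_per_wall
    return gain
```

In a real engine this raycast would run against dedicated audio geometry rather than the graphics or physics meshes, which is precisely the gap the paper addresses.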
Reactive virtual acoustic environments (VAEs) that respond to any user-generated sound with an appropriate acoustic room response enable immersive audio applications with enhanced sonic interaction between the user and the VAE. This paper presents a reactive VAE that has two clear advantages over other systems introduced so far: it works with any type of sound source, and the dynamic directivity of the source is adequately considered in the binaural reproduction. The paper describes the implementation of the reactive VAE and completes the technical evaluation of the overall system, focusing on the recently added software components. Regarding use of the system in research, the paper briefly discusses the challenges of conducting psychoacoustic experiments with such a reactive VAE.
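At the core of any VAE that reacts to a live source is convolution of the captured (dry) signal with a room response. The pure-Python sketch below shows the offline, static case for a binaural pair of impulse responses; it is illustrative only, and the paper's real-time, directivity-aware implementation is necessarily more involved:

```python
def render_reverberant(dry, brir_left, brir_right):
    """Convolve a mono dry signal with left/right room impulse
    responses (plain lists of samples) to get the two ear signals."""
    def conv(x, h):
        # Direct-form linear convolution: output length len(x)+len(h)-1.
        y = [0.0] * (len(x) + len(h) - 1)
        for i, xi in enumerate(x):
            for j, hj in enumerate(h):
                y[i + j] += xi * hj
        return y
    return conv(dry, brir_left), conv(dry, brir_right)
```

A reactive system would additionally re-select or interpolate the impulse responses as the source's position and directivity change, and use partitioned FFT convolution to meet real-time deadlines.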
Music production has always been influenced by and evolved alongside the newest technological standards and listener demands. This paper discusses the 3D mix aesthetics of Ambisonics beyond 6th order, using a classical Turkish music production as a case study. An ensemble recording was made in the recording studio of Istanbul Technical University (ITÜ) MIAM. The channels of that session were mixed on the High Density Loudspeaker Array in the Immersive Audio Lab of University 2, exploring generic approaches to spatial music production. The resulting mixes were rated in a survey grading immersive audio parameters.
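For scale: a full-sphere (periphonic) Ambisonics signal of order N carries (N+1)² channels, so working beyond 6th order means handling 64 or more channels per mix bus. A one-line helper illustrates the growth:

```python
def ambisonic_channels(order: int) -> int:
    """Channel count of a full-sphere Ambisonics signal of the given order."""
    return (order + 1) ** 2

# 6th order -> 49 channels, 7th -> 64, 10th -> 121.
```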
Perhaps the most pervasive immersive format at present is 360º video, which can be panned whilst being viewed. Typically, such footage is captured with a specialist camera. Approaches and workflows for the creation of 3-D audio for this medium are seldom documented, and methods are often heuristic. This paper offers insight into such approaches, and whilst centered on post-production, also discusses some aspects of audio capture. This is done via a number of case studies that draw from the commercial work of the immersive-audio company 1.618 Digital. Although these case studies are unified by certain common approaches, they also include unusual aspects such as binaural recording of insects, sonic capture of moving vehicles and the use of drones.
Of the many sounds we encounter throughout the day, some stay lodged in our minds more easily than others; these may serve as powerful triggers of our memories. In this paper, we measure the memorability of everyday sounds across 20,000 crowd-sourced aural memory games, and then analyze the relationship between memorability and acoustic/cognitive salience features; we also assess the relationship between memorability and higher-level gestalt features such as familiarity, valence, arousal, source type, causal certainty, and verbalizability. We suggest that modeling these cognitive processes opens the door for human-inspired compression of sound environments, automatic curation of large-scale environmental recording datasets, and real-time modification of aural events to alter the likelihood that they are remembered.
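The abstract does not spell out the scoring metric, but a common memorability score in repeat-detection memory games of this kind is the hit rate (fraction of repeat presentations a listener correctly flags), optionally corrected by the false-alarm rate on first presentations. A hypothetical sketch of that corrected score:

```python
def memorability(hits: int, repeats: int, false_alarms: int, firsts: int) -> float:
    """Hit rate minus false-alarm rate for one sound across all games.

    hits/repeats:        correct 'heard it before' responses on repeats.
    false_alarms/firsts: incorrect 'heard it before' responses on first plays.
    """
    return hits / repeats - false_alarms / firsts

# e.g. flagged on 40 of 50 repeats, falsely flagged on 5 of 100 first plays:
# memorability(40, 50, 5, 100) -> 0.75
```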