Chair: Lorenzo Picinali
Immersive sandboxes for music creation in Virtual Reality (VR) are becoming widely available. Some sandboxes host Virtual Reality Musical Instruments (VRMIs), but usually only basic components such as oscillators, sample-based instruments, or simplistic step sequencers. In this paper, after describing MuX (a VR sandbox) and its basic components, we present new elements developed for the environment, focusing on lumped and distributed physically-inspired models for sound synthesis. A simple interface was developed to control the physical models with gestures, expanding the interaction possibilities within the sandbox. A preliminary evaluation shows that, as the number and complexity of the components increase, it becomes important to provide users with ready-made machines rather than requiring them to build everything from scratch.
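The abstract does not detail the physical models themselves, but a lumped physically-inspired model of the kind it mentions can be as simple as a single mass-spring-damper resonator integrated at audio rate. The sketch below is illustrative only (the function name, parameters, and integration scheme are assumptions, not MuX's actual implementation): an initial displacement excites the mass, and the resulting motion is a decaying sinusoid.

```python
import math

def mass_spring_damper(n_samples, sr=44100.0, freq=440.0, damping=3.0):
    """Synthesize a decaying tone from a lumped mass-spring-damper model.

    The stiffness k is chosen so the undamped resonance sits at `freq` Hz
    (k = m * (2*pi*freq)**2, with unit mass m = 1).
    """
    m = 1.0
    k = m * (2.0 * math.pi * freq) ** 2
    dt = 1.0 / sr
    x, v = 1.0, 0.0          # initial displacement excites the model
    out = []
    for _ in range(n_samples):
        # Symplectic Euler: update velocity from the force, then position.
        a = (-k * x - damping * v) / m
        v += a * dt
        x += v * dt
        out.append(x)
    return out

tone = mass_spring_damper(44100)  # one second of a decaying 440 Hz tone
```

Distributed models (strings, membranes) extend the same idea to a chain or grid of such masses coupled by springs.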
Interactive auralization workflows in games and virtual reality today rely on manual markup coupled with designer-specified acoustic effects that lack spatial detail. Acoustic simulation can model such detail, yet it is uncommon because realism often does not perfectly align with aesthetic goals. We show how to integrate realistic acoustic simulation while retaining designer control over aesthetics. Our method eliminates manual zone placement, provides spatially smooth transitions, and automates re-design for scene changes. It proceeds by computing perceptual parameters from simulated impulse responses, then applying transformations based on novel modification controls presented to the user. The result is an end-to-end physics-based auralization system with designer control. We present case studies demonstrating the viability of the approach.
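The abstract does not name the perceptual parameters it extracts, but reverberation time is a typical example. As an illustrative sketch (not the paper's method), RT60 can be estimated from an impulse response via Schroeder backward integration, fitting a line to the -5 to -25 dB portion of the decay and extrapolating to -60 dB:

```python
import math

def schroeder_rt60(ir, sr):
    """Estimate RT60 via Schroeder backward integration of an impulse
    response, fitting the -5..-25 dB decay region and extrapolating."""
    energy = [s * s for s in ir]
    total = sum(energy)
    # Backward cumulative energy (the Schroeder integral).
    edc, acc = [], total
    for e in energy:
        edc.append(acc)
        acc -= e
    # Decay curve in dB, normalised to 0 dB at t = 0.
    db = [10.0 * math.log10(max(v, 1e-12) / total) for v in edc]
    # Least-squares line through the -5..-25 dB region.
    pts = [(i / sr, d) for i, d in enumerate(db) if -25.0 <= d <= -5.0]
    n = len(pts)
    sx = sum(t for t, _ in pts); sy = sum(d for _, d in pts)
    sxx = sum(t * t for t, _ in pts); sxy = sum(t * d for t, d in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # dB per second
    return -60.0 / slope

# Synthetic exponentially decaying IR with a known RT60 of 0.5 s:
# the amplitude envelope exp(-6.91 * t / RT60) falls 60 dB in RT60 seconds.
sr = 8000
ir = [math.exp(-6.91 * i / (sr * 0.5)) for i in range(sr)]
estimate = schroeder_rt60(ir, sr)  # should land close to 0.5
```

A designer-facing modification control could then operate on such parameters (e.g., scaling RT60) before resynthesis, rather than on the raw impulse response.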
Virtual Reality (VR) systems have been intensely explored, with several research communities investigating the different modalities involved. Regarding the audio modality, one of the main issues is the generation of sound that is perceptually coherent with the visual reproduction. Here, we propose a pipeline for creating plausible interactive reverb from visual information: first, we characterize the acoustics of a real environment given a pair of spherical cameras; then, we reproduce reverberant spatial sound within a VR scene using the estimated acoustics. The evaluation extracts the room impulse responses (RIRs) of four virtually rendered rooms. Results show agreement, in terms of objective metrics, between the synthesized acoustics and those calculated from RIRs recorded in the corresponding real rooms.
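The abstract does not list the objective metrics used; one standard RIR-derived metric that such a comparison could use is the clarity index C50, the ratio in dB of early (first 50 ms) to late energy. The following sketch is illustrative only and assumes a plain list of samples:

```python
import math

def clarity_c50(rir, sr):
    """Clarity index C50: ratio (in dB) of the energy in the first 50 ms
    of a room impulse response to the energy arriving after 50 ms."""
    split = int(0.05 * sr)
    early = sum(s * s for s in rir[:split])
    late = sum(s * s for s in rir[split:])
    return 10.0 * math.log10(early / late)

# Synthetic exponentially decaying RIR (RT60 = 0.5 s) for demonstration.
sr = 8000
rir = [math.exp(-6.91 * i / (sr * 0.5)) for i in range(sr)]
c50 = clarity_c50(rir, sr)
```

Agreement between synthesized and measured acoustics can then be reported as the absolute difference of such metrics across the room pairs.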
Resonance Audio is an open-source project for creating and controlling dynamic spatial sound in Virtual and Augmented Reality (VR/AR), gaming, or video experiences. It also provides integrations with popular game development platforms and, as a preview plugin, with digital audio workstations. The Resonance Audio binaural decoder is used in YouTube VR to provide cinematic spatial audio experiences. This paper describes the core sound spatialization algorithms used in Resonance Audio.
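At the heart of ambisonic spatializers of this kind is the encoding of a mono source into spherical-harmonic channels before binaural decoding. As a minimal sketch (not Resonance Audio's actual code), the first-order encoding gains for a source direction, in ambiX-style ACN channel order with SN3D normalisation, are:

```python
import math

def ambisonic_encode_fo(azimuth_deg, elevation_deg):
    """First-order ambisonic encoding gains in ACN channel order
    (W, Y, Z, X) with SN3D normalisation, as used in ambiX pipelines."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = 1.0                              # omnidirectional component
    y = math.sin(az) * math.cos(el)      # left-right figure-of-eight
    z = math.sin(el)                     # up-down figure-of-eight
    x = math.cos(az) * math.cos(el)      # front-back figure-of-eight
    return [w, y, z, x]

gains = ambisonic_encode_fo(0.0, 0.0)
```

Multiplying a mono signal by these gains yields the four first-order channels; a binaural decoder then convolves the ambisonic mix with head-related transfer functions to produce the headphone output.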