
2016 AES International Conference on Audio for Virtual and Augmented Reality: Friday Sessions, Theater


 

 


  


 

Our dedicated AVAR Conference Registration Desk on the lower level will be open from 2:00 PM through 6:00 PM on
Thursday, September 29 for badge collection. Follow the signs from the exterior of the West Hall and the parking areas.
This desk, and another outside the Theater on Level Two, will be staffed each day of the conference.

 

Friday, September 30 - Sessions in Theater

 

 


8:30 - 9:00 AM

Opening Comments

 

9:00 - 9:30 AM

Opening Keynote:
The Journey into Virtual and Augmented Reality

 

by Philip Lelyveld - VR/AR Initiative Program Manager, USC Entertainment Technology Center

Virtual, Augmented, and Mixed Reality have the potential to deliver interactive experiences that take us to places of emotional resonance, give us the agency to form our own experiential memories, and become part of the everyday lives we will live in the future. Philip Lelyveld will define what Virtual, Augmented, and Mixed Reality are; present recent developments that will shape their potential impact on entertainment, work, learning, social interaction, and life in general; and raise rarely mentioned but important issues that will affect how VR/AR/MR is adopted. Just as TV programming progressed from live broadcasts of staged performances to today’s very complex language of multithread long-form content, so VR/AR/MR will progress from the current ‘early days’ of projecting existing media language, with a few tweaks, into a headset experience, toward a new VR/AR/MR-specific language that both the creatives and the audience understand. Philip's goal is to bring you up to speed on the current state, the potential, and the known barriers to adoption of Virtual, Augmented, and Mixed Reality.

9:30 AM

Break

 

9:45 - 11:15 AM

 

Tutorial 1:
Audio Recording and Production for Virtual Reality/360° Applications


Chaired by
Jan Plogsties - Fraunhofer Institute

 

Virtual Reality has been a huge topic at IBC, the AES 139th Convention, CES, and other events. VR producers increasingly recognize the potential of, and need for, spatial audio processing in VR applications. This workshop will discuss the following topics:
1. How to record audio for 360° video? Can we use the same techniques as for movie/TV productions? Does a B-format mic do the trick?
2. How to mix audio for 360/VR? Channels, objects, Ambisonics, a combination, or binaural? Which plug-ins and production tools, and how to monitor?
3. How to deliver audio for different VR applications? What codecs and formats are there?
4. How to render audio for 360/VR? Headphone rendering for VR glasses: what are the resolution and latency requirements?
5. What quality aspects are important? Accuracy and plausibility, interaction with video?

11:15 AM

Break

 

11:30 AM - 12:30 PM

Tutorial 2:
Creating Immersive & Aesthetic Auditory Spaces for Virtual and Augmented Reality


Presented by Chanel Summers - University of Southern California and Syndicate 17 LLC

This presentation will discuss the challenges and provide specific solutions for creating audio within interactive virtual and augmented reality experiences. Audio techniques will be revealed that can be used today to advance storytelling and gameplay in virtual environments while creating a cohesive sense of place. Processes and techniques will be demonstrated for use in the creation of soundscapes in shipping products, ranging from immersive mixed reality experiences to multi-participant, multi-site, location-based games.

12:45 -
1:45 PM

Creating Scientifically Valid Spatial Audio for VR and AR: Theory, Tools and Workflows


Ramani Duraiswami and Adam O'Donovan - VisiSonics Corp.


The goal of VR and AR is to immerse the user in a created world by fooling the human perceptual system into perceiving rendered objects as real. This must be done without the brain experiencing fatigue, and accurate audio representation plays a crucial role in achieving this. Unlike vision, with its narrow foveated field of view, human hearing covers all directions in full 3D. Spatial audio systems must therefore provide realistic rendering of sound objects in full 3D to complement stereoscopic visual rendering. We will describe several areas of our research, conducted initially at the University of Maryland over a decade and since then at VisiSonics, that led to the development of a robust 3D audio pipeline encompassing capture, measurement, mathematical modeling, rendering, and personalization. The talk will also demonstrate workflow solutions designed to enrich audio immersion for gaming, video post-production, and capture in VR/AR.

2:00 - 3:30 PM

Tutorial 3:
Spatial Audio and Sound Propagation for VR: New Developments, Implementations, and Integration

Chaired by Dr. Dinesh Manocha - University of North Carolina at Chapel Hill
Participants: Dr. Anish Chanda - Impulsonic Inc, Neil Wakefield - Linden Lab

In this tutorial we give an overview of recent research and tools for immersive spatial audio and sound propagation effects in VR. We also discuss the sound design and integration aspects of adding these propagation techniques and capabilities to a massive VR platform, Project Sansar from Linden Lab.
Spatial audio is important for maintaining audio-visual coherence in VR, increasing realism and the sense of presence. It refers to 3D audio and environmental effects such as sound occlusion, reflection, diffraction, and reverberation. However, simulating spatial audio quickly enough to track orientation and positional changes in VR is computationally challenging.
We will give an overview of research techniques developed over the last 10 years for efficiently modeling spatial audio in complex VR worlds. There will also be a hands-on tutorial using Phonon, which implements many of these state-of-the-art spatial audio algorithms. Phonon integrates with a wide variety of game engines and audio engines, and we use these applications to demonstrate the performance of these novel spatial audio algorithms. We also demonstrate how 3D audio effects can be applied to environmental effects in spatial audio. Finally, we will discuss sound design considerations when adding spatial audio for VR, as well as the practical challenges of adding spatial audio to a large VR platform, especially with regard to making spatial audio tools accessible to untrained users and content creators.

3:30 PM

Break

 

3:45 - 4:30 PM

Tutorial 4:
3D Audio Post-Production Workflows for VR

Presented by Viktor Phoenix & Scott Gershin - Technicolor

An overview of solutions to some of the creative and practical challenges encountered in the audio post-production pipeline for 360° videos and Virtual Reality. We discuss methodologies for monitoring, editing, designing, mixing, mastering, and delivering audio for VR and 360° videos, and how to integrate a 3D audio workflow into existing post-production pipelines by merging best practices from games, television, and feature film with new strategies for this emerging medium. The future of content delivery and playback is considered while still respecting current infrastructures for delivering client projects and accommodating the variety of delivery formats required by the various virtual reality and 360° video platforms.

4:30 PM

Break

 

4:45 - 6:15 PM

Tutorial 5:
How Can Audiology and Hearing Science Inform AVAR, and Vice Versa?
 

Chaired by Dr. Chris Stecker - Vanderbilt University School of Medicine
Participants: Dr. Erick Gallun - VA National Center for Rehabilitative Auditory Research, Dr. Dan Tollin - University of Colorado, Dr. Ryan McCreery - Boys Town National Research Hospital, Dr. Poppy Crum - Dolby Laboratories

This panel discussion will feature investigators in hearing science and audiology, including experts in binaural hearing, audiological assessment and rehabilitation, next-generation hearing aids, and auditory cognitive neuroscience. Brief presentations will highlight the current and future impacts of hearing science on AVAR, e.g., the evolution of binaural hearing aids into spatially intelligent devices, lessons from auditory scene analysis, and brain-directed signal processing. Applications of AVAR technology to hearing science and the audiology clinic will also be presented, e.g., the use of immersive VR to diagnose and retrain spatial hearing deficits, and the benefits of using binaural devices to study hearing “in the wild.”

6:15 PM

Break

 

6:30 - 7:15 PM

Tutorial 6:
VR Audio - The Convergence of Sound Professions

 Presented by Christopher Hegstrom - Symmetry Audio

At the very least, VR audio is an exciting new paradigm for audio professionals to learn; at most, it is a convergence of all the preceding subcategories of the audio professions.
Built with game engines, using film cinematography, delivered on a mobile platform with the presence of theater, and live-streamed in ways inspired by broadcast, convincing VR will take all of our combined knowledge to pull off. Audio is the glue that binds all of these sub-genres together.
This talk will identify what we can apply to VR audio from each of these proficiencies, what we can learn from other VR system technologies (such as cameras or haptics), and how audio can inspire other professions through our standardization and collaboration.
