145th AES Convention, AES New York 2018
Game Audio & XR Track Event Details

Wednesday, October 17, 10:15 am — 11:30 am (1E08)

Game Audio & XR: GA01 - Practical Recording Techniques for Live Music Production in 6DOF VR

Cal Armstrong, University of York - York, UK
Gavin Kearney, University of York - York, UK
David Rivas Méndez, University of York - York, UK
Hashim Riaz, Abbey Road Studios - London, UK
Mirek Stiles, Abbey Road Studios - London, UK

As virtual and augmented reality technologies move towards systems that can deliver full six degrees of freedom (6DOF), it follows that good strategies must be employed to create effective 6DOF audio capture. In a musical context, this means that if we record an ensemble, we must give the end user the potential to move close to and even around audio sources with a high degree of plausibility to match the visuals. This workshop looks at recording strategies that enable 3DOF/3DOF+ and 6DOF for live music performances.

This session is presented in association with the AES Technical Committee on Audio for Games


Wednesday, October 17, 10:45 am — 12:15 pm (1E17 (Surround Rm))

Immersive & Spatial Audio: IS01 - Spatial Audio-Video Creations for Music, Virtual Reality and 3D Productions – Case Studies

Tomasz Zernicki, Zylia sp. z o.o. - Poznan, Poland
Florian Grond, McGill University - Montreal, Canada
Yao Wang, ICTUS - Boston, MA, USA
Edward Wersocki, Northeastern University - Boston, MA, USA

The goal of this workshop is to present spatial audio-video creations in practice. Professional audio engineers and musicians will discuss their 360°, 3D, and ambient productions combining sound and vision. Projects to be discussed include “Unraveled,” a 360° spatial experience in which the listener finds themselves in the middle of the entire recording. The speakers will also describe the process of making 3D audiovisual footage for display in a 360° dome, as well as spatial recordings of concert music. The workshop will focus especially on spherical microphone arrays, which make it possible to record an entire 3D sound scene. The separation of individual sound sources in post-production, combined with Ambisonics, gives creators wide-ranging possibilities for achieving unique audio effects.


Wednesday, October 17, 11:45 am — 12:15 pm (1E08)


Game Audio & XR: GA02 - Live Coding Tutorial: Real-Time Open Sound Communication from Unreal Engine 4 to Max 7

Timothy Vallier, Orion Healthcare Technology - Omaha, NE, USA

Developers often face challenges when integrating visual-based virtual reality environments with auditory-based virtual auditory environment signal processing. This live coding tutorial will demonstrate the ease of bridging these two disparate environments via Open Sound Control (OSC). Alone, each offers a great deal of control and complexity within its respective domain (Max in audio, UE4 in visuals). Together, they offer complete control and synchrony for virtual reality applications requiring multichannel virtual auditory environments. The tutorial will demonstrate a from-scratch approach to setting up the two platforms to communicate with one another.
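The tutorial builds the bridge inside UE4 and Max themselves, but the OSC wire format is simple enough to sketch in a few lines. Below is a minimal, illustrative Python encoder for a single-float OSC message sent over UDP; the address `/listener/azimuth` and port 7400 are invented for the example.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")                # type tag string: one float argument
            + struct.pack(">f", value))     # big-endian float32 payload

# Send a listener-orientation update as fire-and-forget UDP.
packet = osc_message("/listener/azimuth", 90.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7400))
sock.close()
```

On the Max side, a `[udpreceive 7400]` object would pick up and parse such a packet.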


Wednesday, October 17, 4:15 pm — 5:45 pm (1E08)

Immersive & Spatial Audio: IS02 - Delivering Interactive Experiences Using HOA through MPEG-H

Patrick Flanagan, THX Ltd. - San Francisco, CA, USA
Stephen Barton, Afterlight Inc.
Simon Calle, THX Ltd.
Nick Laviers, Respawn Entertainment
Aaron McLeran, Epic Games
Nils Peters, Qualcomm, Advanced Tech R&D - San Diego, CA, USA

A panel discussion about HOA (Higher Order Ambisonics) content creation and how HOA could transform the way we produce audio for all types of media. Discussion includes DAWs for creating HOA, MPEG-H file compression for delivery of up to 6th-order Ambisonics, broadcasting tools and possibilities, and how HOA and MPEG-H are gaining traction in the world.
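For context (this is background, not material from the panel): the number of channels in a full-sphere Ambisonic scene grows quadratically with order, which is why efficient compression such as MPEG-H matters for delivery. A quick back-of-the-envelope sketch:

```python
def hoa_channel_count(order: int) -> int:
    """Number of full-sphere Ambisonic channels for a given order: (N + 1)^2."""
    return (order + 1) ** 2

# 1st order (classic B-Format) -> 4 channels; 6th order -> 49 channels,
# which is the channel count an MPEG-H stream would need to carry for 6th-order HOA.
for n in (1, 3, 6):
    print(f"order {n}: {hoa_channel_count(n)} channels")
```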

This session is presented in association with the AES Technical Committee on Audio for Games


Wednesday, October 17, 4:15 pm — 5:45 pm (1E06 (Immersive/PMC Rm))


Recording & Production: RP04 - The WoW Factor

Jim Anderson, Anderson Audio NY - New York, NY, USA; Clive Davis Institute of Recorded Music, New York University - New York, NY, USA

What is Wow? Who has Wow? Where is Wow? Why is Wow needed? When can I get Wow? How can I get Wow?
Over one hundred years ago, audiences experienced “Wow” listening to a singer and comparing the live sound with a recording. Observers at the time found it “almost impossible to tell the difference” between what was live and what was recorded. Sixty years ago, the transition from monaural to stereophonic sound brought “realism” into listeners’ homes, and today audiences can be immersed in sound. This talk will trace the history of how listeners have been educated and entertained through the latest sonic developments, saying to themselves and each other: “Wow!”


Thursday, October 18, 9:00 am — 10:00 am (1E08)


Game Audio & XR: GA03 - Anatomy of Great Voice Over: A Casting & Recording Primer

Andrea Toyias, Blizzard Entertainment - Irvine, CA, USA

Game dialogue is one of the final ingredients that breathes life into a video game. The story flows, the gameplay engages, and the characters come to life through memorable VO performances, bringing depth and immersion to the overall experience. Andrea Toyias will draw on her 10 years as Head of the VO department at Blizzard Entertainment to give you tools, tips, and insights on how best to create the vocal performances you are after. Topics will include: how to fully flesh out character specs when casting; what to listen for when auditioning; the best ways to prep for a recording session; and how to work successfully with voice actors in order to better create, collaborate, and experiment with them so you can bring your vision to life in new and exciting ways. The relationship between game team, director, and talent will be broken down, examined, and explored in its purest form.


Thursday, October 18, 10:15 am — 11:45 am (1E13)

Sound Reinforcement: SR04 - Designing for Broadway: The Band's Visit

Kai Harada

Among the many Tony Awards won by "The Band's Visit" was the Tony for Best Sound for a Musical. Kai Harada and his team will discuss the process of bringing this production and others to the stage.


Thursday, October 18, 10:15 am — 11:15 am (1E08)

Game Audio & XR: GA04 - Games v. Cinema: Grudge Match

Steve Martz, THX Ltd. - San Rafael, CA, USA
Lydia Andrew, Ubisoft - Quebec City, Canada
Jason Kanter, Audio Director, Avalanche Studios - New York, NY, USA
Harold Kilianski, Fanshawe College - MIA - London, ON, Canada
John Whynot, Berklee College of Music - Los Angeles, CA, USA

Game audio and audio for cinema: two worlds that create epic soundscapes. How similar or different are they? Do they share the same tools, design plans, and challenges, or are they completely distinct? Join this session to hear from four leaders in the industry, two from cinema and two from games, as they discuss audio design for their fields. Learn how sound design, dialogue, and FX strategies differ between the two realms and how the two sometimes even work together.

This session is presented in association with the AES Technical Committee on Audio for Games


Thursday, October 18, 10:15 am — 11:15 am (1E21)

Recording & Production: RP08 - Space, Place, and Bass: Providing Modern Metal Music with an Appropriate Balance between Heaviness and Clarity

Steven Fenton, University of Huddersfield - Huddersfield, West Yorkshire, UK
Mark Mynett, University of Huddersfield - Huddersfield, UK

Distinct challenges are posed when providing the various sounds/performances of a modern metal mix with appropriate space, place, and bass. This is especially the case when down-tuning is combined with fast performance subdivisions and ensemble rhythmic synchronization.
This workshop covers intermediate-to-advanced “space, place, and bass” mix principles that afford this production style an appropriate balance between heaviness and clarity, including: frequency-bracketed kick and bass approaches; anti-masking mix theory (with a focus on different “designs”); dynamic EQ and multiband compression; series and parallel dynamics approaches; and time-based processing principles.
Mark Mynett, who lectures in Music Technology and Production at the University of Huddersfield, is a record producer and the author of Metal Music Manual, the world’s first book on producing, engineering, mixing, and mastering contemporary heavy music.


Thursday, October 18, 11:30 am — 12:30 pm (1E17 (Surround Rm))

Game Audio & XR: GA05 - A Systemic Approach to Interactive Dialogues on Assassin’s Creed Odyssey—From Speech to SFX to Music

Lydia Andrew, Ubisoft - Quebec City, Canada
Greig Newby, Ubisoft - Quebec, Canada

From the beginning of Assassin’s Creed Odyssey, we recognized that this rich, continually unfolding open world game demanded more than the traditional manual, minute-by-minute approach to audio design, integration and mixing. The dual protagonists, the interactive dialogues, and the massive scale meant we needed to build systems that were both responsive to the complexity of our game world and to the individuality of our players’ choices.

The presentation will cover this systemic approach, showing how we created and used tools and pipelines to support our players’ freedom of choice. We will talk about the complexity of constructing, recording, and integrating the voice into the interactive dialogue system, focusing on the new tools and pipelines we developed. We will show how music is used in the interactive dialogues to support character, emotion, and player choice. We will talk about how we aimed to maintain the consistency of the player experience with Foley, SFX, and ambiences by seamlessly moving in and out of the interactive dialogues. Finally, we will discuss how we brought all these elements together through systems that were the friend, not the enemy, of creativity.

The attendees will walk away with an understanding of the potential challenges of implementing a branching interactive dialog system in an open world game and some insights on how to transform their traditional linear pipelines.


Thursday, October 18, 12:30 pm — 1:30 pm (1E06 (Immersive/PMC Rm))


Game Audio & XR: GA06 - Shadow of the Tomb Raider: A Case Study Dolby Atmos Video Game Mix

Rob Bridgett, Eidos Montreal - Montreal, Canada

Shadow of the Tomb Raider was developed at Eidos Montreal over a three-year period and had its final mix at Pinewood Studios in the UK over a two-week period. The game was mixed entirely in Dolby Atmos for Home Theatre and was one of the first console games to author height-based 3D sound specifically for this exciting new surround format.

Audio Director Rob Bridgett will cover all aspects of bringing this mix to fruition, from planning to execution, in this fascinating post-mortem. Highlights include:
• Mix philosophy overview for a blockbuster AAA action title
• Unexpected side-effects of height-based surround
• Critical tools and techniques for surround and overhead-based mixing
• Implementing loudness guidelines
• Differences and benefits of Atmos and object-based surround sound systems for games, over and above those of movies
• Middleware and live-tuning workflow examples and descriptions
• Mix team composition and roles

This session is presented in association with the AES Technical Committee on Audio for Games


Thursday, October 18, 5:00 pm — 6:00 pm (1E17 (Surround Rm))

Immersive & Spatial Audio: IS07 - Virtual Reality Audio: B-Format Processing

Christof Faller, Illusonic GmbH - Uster, Zürich, Switzerland; EPFL - Lausanne, Switzerland

B-Format has enjoyed a revival in recent years and has established itself as the audio format of choice for VR videos and content. Experts in signal processing and production tools will present and discuss the latest innovations in B-Format processing, including processing on the recording and rendering sides as well as B-Format post-production.
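As background for readers new to the format: first-order B-Format carries one omnidirectional channel (W) plus three figure-of-eight channels (X, Y, Z). A minimal panning-style encoder, sketched here in Python under the traditional convention that attenuates W by 1/√2 (other conventions such as AmbiX normalize differently):

```python
import math

def encode_bformat(sample: float, azimuth_deg: float, elevation_deg: float):
    """Encode a mono sample into first-order B-Format (W, X, Y, Z).

    Traditional (FuMa-style) convention: W attenuated by 1/sqrt(2);
    azimuth measured counter-clockwise from front, elevation upward.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)                  # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)   # front-back figure-of-eight
    y = sample * math.sin(az) * math.cos(el)   # left-right figure-of-eight
    z = sample * math.sin(el)                  # up-down figure-of-eight
    return w, x, y, z

# A source directly in front at ear level puts all directional energy in X.
print(encode_bformat(1.0, 0.0, 0.0))
```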

This session is presented in association with the AES Technical Committee on Spatial Audio


Thursday, October 18, 5:15 pm — 5:45 pm (1E06 (Immersive/PMC Rm))


Audio for Cinema: AC04 - The 5th Element – How a Sci-Fi Classic Sounds with a New 3D Audio Mix

Tom Ammermann, New Audio Technology GmbH - Hamburg, Germany

The 5th Element is certainly a milestone in sci-fi film history. Recently it was completely reworked, with a new 4K film scan and a remix of all the audio elements in Dolby Atmos and Headphone Surround 3D. This version was released in Germany on UHD Blu-ray and offers a fantastic new way to experience this great production from Luc Besson. The session offers listening examples and inside information about the production.


Thursday, October 18, 5:15 pm — 5:45 pm (1E08)

Game Audio & XR: GA07 - Designing Game Audio Plugins

Kris Daniel, McDSP - Media, PA, USA

With the growth of game audio middleware, plugin developers face new engineering challenges to produce plugins that can be deployed to multiple hosting situations. This session will explore how designing for multiple platforms upfront can save time during porting, by analyzing the fundamental pieces of a plugin and learning how those map to multiple game audio environments. Other upfront design considerations such as choice of language, repository organization, and processing strategies will be discussed, as well as pitfalls encountered from the trenches of development.

This session is presented in association with the AES Technical Committee on Audio for Games


Friday, October 19, 10:15 am — 11:15 am (1E08)

Immersive & Spatial Audio: IS11 - The Audio Edge: Ambient Computing, Artificial Intelligence and Machine Learning with Sound

Sally Kellaway, Microsoft - Seattle, WA, USA
George Valavanis, Microsoft - Seattle, WA, USA

Audio professionals understand that sound is a powerful signal for capturing and conveying information about the world. From sound designers to composers, we use the communicative capacity of sound to tell stories. Advancements in Artificial Intelligence and Machine Learning are introducing new ways to process sound as data to better understand our environment and expand our awareness.

Presenting findings from Microsoft's Mixed Reality at Work development team, George Valavanis and Sally Kellaway discuss audio's role in our cloud-connected future. From ML data capture workflows to the Microsoft Azure and Dynamics 365 tools used to develop data insights, we'll uncover how audio will expand the way we interact with our world, define a new class of hardware technologies, and become the data stream of the future.


Friday, October 19, 11:30 am — 12:30 pm (1E08)

Game Audio & XR: GA08 - Writing a New Audio Engine for UE4: Innovation Under Pressure

Aaron McLeran, Epic Games

In this talk I will describe the technical design challenges and opportunities inherent in writing a new next-gen-capable audio engine for Unreal Engine 4 (UE4), a widely licensed game engine. I will present the prior state of the audio engine and analyze technical constraints and issues which guided API design choices and feature prioritization. I will then describe challenges we faced while testing for quality, stability and correctness and the process of launching a new audio engine within a regular release schedule of UE4 engine updates while not breaking licensees and backward compatibility. Finally, I will discuss the launch of the audio engine on Fortnite on 6 platforms without slowing down audio content production or interrupting an ambitious 2-week release cadence.

This session is presented in association with the AES Technical Committees on Audio for Games, Spatial Audio, and Audio for Cinema


Friday, October 19, 1:45 pm — 2:45 pm (1E17 (Surround Rm))

Game Audio & XR: GA09 - Just Cause 4: Guns and Music and Mix . . . Oh My!

Ronny Mraz, Avalanche Studios - New York, NY, USA
Dominic Vega, Senior Sound Designer, Avalanche Studios - New York, NY, USA

The Just Cause franchise has always presented players with one of the largest free-roaming environments in the open-world action genre, and Just Cause 4 is no exception. During this presentation we'll cover the music, weapons, and mix of JC4, with an in-depth look at the approach taken for the design and implementation of these systems.


Friday, October 19, 3:00 pm — 4:00 pm (1E17 (Surround Rm))


Game Audio & XR: GA10 - Wwise Spatial Audio: A Practical Approach to Virtual Acoustics

Nathan Harris, Audiokinetic - Montreal, QC, Canada

Wwise Spatial Audio is becoming increasingly advanced and now allows for real-time modeling of acoustic phenomena, including reflection, diffraction, and sound propagation, by informing the sound engine about 3D geometry in the game or simulation.
In this workshop Nathan Harris, a software developer on the Audiokinetic research and development team, will give an overview of the technology behind Wwise Spatial Audio. He will demonstrate how reflection, diffraction, and sound propagation are simulated, and how the Wwise authoring tool can be used to monitor the simulation and enable creative intervention when desired. Using the Wwise Audio Lab, a “sandbox” for experimentation, Nathan will walk through a live listening demonstration.
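For readers unfamiliar with the geometric side of such simulations: a classic way to find an early reflection is the image-source method, which mirrors the source across a reflecting surface. A minimal 2D sketch follows; all coordinates are invented for illustration, and real engines such as Wwise work from full 3D game geometry.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def first_reflection(source, listener, wall_x):
    """Image-source model for a single wall at x = wall_x:
    mirror the source across the wall, then measure the straight-line
    path from the image to the listener (the reflected path length)."""
    image = (2 * wall_x - source[0], source[1])
    path = math.dist(image, listener)
    return path, path / SPEED_OF_SOUND  # path length (m), arrival delay (s)

# Source 1 m and listener 3 m from a wall at x = 0 (hypothetical layout).
length, delay = first_reflection((1.0, 0.0), (3.0, 0.0), 0.0)
print(f"reflected path {length} m, delay {delay * 1000:.2f} ms")
```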


Friday, October 19, 3:30 pm — 4:30 pm (1E08)

Game Audio & XR: GA11 - Microtalks: (Listening) Into the Future

Sally Kellaway, Microsoft - Seattle, WA, USA
Jean-Pascal Beaudoin, Headspace Studio - Montreal, QC, Canada; Felix & Paul Studios - Santa Monica, CA, USA
Linda A. Gedemer, Source Sound VR - Woodland Hills, CA USA; University of Salford - Salford, UK
Sadah Espii Proctor, Espii Studios - Brooklyn, NY, USA
Margaret Schedel, Stony Brook University - Stony Brook, NY, USA
George Valavanis, Microsoft - Seattle, WA, USA

The audio industries have been in a perpetual state of technological revolution since their inception, making for a volatile, interesting, and fast-paced environment that has left many trailing in its dust. With the exploration of games, virtual, augmented, and mixed reality, and artificial intelligence continuing full steam ahead, how do we fit into this new world, and how important can we make audio? Our five speakers each have eight minutes to explore what the future holds for audio (or the future of listening).

The microtalk format is a panel of five speakers, each speaking for exactly eight minutes, with 24 slides auto-advancing every 20 seconds. The session is designed to explore the space around important audio industry topics; speakers aim to provoke and challenge standards of topic, thought, and presentation.

This session is presented in association with the AES Technical Committee on Audio for Games


Friday, October 19, 4:30 pm — 5:45 pm (1E09)

Product Development: PD14 - Audio Source Separation—Recent Advancement, Applications, and Evaluation

Chungeun Kim, University of Surrey - Guildford, Surrey, UK; Sony Interactive Entertainment Europe - London, UK
Jon Francombe, BBC Research and Development - Salford, UK
Nima Mesgarani, Columbia University - New York, NY, USA
Bryan Pardo, Northwestern University - Evanston, IL, USA

Audio source separation is a signal processing technique inspired by humans' corresponding cognitive ability, auditory scene analysis. It has a wide range of applications, including speech enhancement, sound event detection, and repurposing. Although it was initially only possible to separate sources captured in very specific configurations, and only to a suboptimal level of quality, advanced signal processing techniques, particularly deep-learning approaches, have both widened the applicability and enhanced the performance of source separation. In this workshop the recent advancement of source separation techniques in various use cases will be discussed, along with the challenges the research community is currently facing. Research activities on the quality aspects specific to source separation, towards effective performance evaluation, will also be introduced.
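Many of the deep-learning systems alluded to above ultimately estimate a time-frequency mask that is applied to the mixture spectrogram. A toy illustration of a soft (ratio) mask over a handful of frequency bins, with all magnitudes invented for the example:

```python
def ratio_mask(est_target, est_interference):
    """Per-bin soft mask: estimated target energy over total estimated energy."""
    return [t / (t + i) if (t + i) > 0 else 0.0
            for t, i in zip(est_target, est_interference)]

mixture = [0.9, 0.5, 0.2, 0.8]   # mixture magnitude per frequency bin
speech  = [0.8, 0.1, 0.0, 0.7]   # estimated speech magnitudes (hypothetical)
noise   = [0.1, 0.4, 0.2, 0.1]   # estimated noise magnitudes (hypothetical)

mask = ratio_mask(speech, noise)
separated = [m * g for m, g in zip(mixture, mask)]  # masked mixture ~ speech
```

A real system would estimate the mask with a neural network and apply it bin by bin to an STFT before resynthesis; the arithmetic per bin is exactly this.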

This session is presented in association with the AES Technical Committee on Semantic Audio Analysis


Friday, October 19, 4:45 pm — 6:00 pm (1E08)

Game Audio & XR: GA12 - Omni-Directional: Sound Career Paths in VR/AR

Chris Burke, DamianL, Inc. - Brooklyn, NY, USA
Jeanine Cowen, Berklee College of Music - Boston, MA, USA
Aaron McLeran, Epic Games
Andrew Sheron, Freelance Composer/Engineer - New York, NY, USA
Michael Sweet, Berklee College of Music - Boston, MA, USA

After fits and starts in the film, gaming, home cinema, and cellular industries, VR and AR are no longer in search of a reason for being. Great immersive visuals demand great immersive audio, and the speed at which standards are being hammered out would make Alan Blumlein's head spin. While a final interoperability spec is still a ways off, the scene-based nature of ambisonics means that producers can create now for the systems of the future. This is already having a huge effect on the industry, with new career paths emerging in sound design, coding, systems design, and more. Jump into the bitstream and learn everything you need to know with our panel of producers and tool developers, and get ready for your new career in sound for immersive media!

This session is presented in association with the AES Technical Committee on Audio for Games


Saturday, October 20, 9:30 am — 10:30 am (1E08)

Game Audio & XR: GA13 - Get Inside the Music: Connecting Artists and Audiences through Interactive Music Apps

Justin Paterson, London College of Music, University of West London - London, UK
Rob Toulson, University of Westminster - London, UK

Commercial artists are nowadays empowered to invite listeners deeper inside their music through interactive music apps incorporating gameplay, stem mixers, alternative content, and production/remix features. Björk, Peter Gabriel, and Massive Attack, for example, have all launched interactive music albums in recent years. The tutorial presenters have also been at the forefront of this movement, collaborating with record labels and music artists to evaluate the commercial and creative opportunities of interactive music.
This tutorial will present a history of interactive music and demonstrate future audience listening experiences. Relevant mobile programming interfaces will be discussed alongside the music production workflow for developing interactive music applications. This insight is intended to inspire developers and students to consider their own ideas for engaging listeners more actively with recorded music.

This session is presented in association with the AES Technical Committee on Recording Technology and Practices


Saturday, October 20, 10:15 am — 11:15 am (1E10)

Game Audio & XR: GA14 - The Stanford Virtual Heart

Daniel Deboy, DELTA Soundworks - Germany
Ana Monte, DELTA Soundworks - Germany

Pediatric cardiologists at Lucile Packard Children's Hospital Stanford are using immersive virtual reality technology to explain complex congenital heart defects, which are some of the most difficult medical conditions to teach and understand. The Stanford Virtual Heart experience helps families understand their child's heart condition. For medical trainees, it provides an immersive and engaging new way to learn about the most common and complex congenital heart anomalies. The panelists will give insight into the challenges of taking a scientific approach to the sound design and how it was integrated in Unity.


Saturday, October 20, 10:45 am — 1:00 pm (1E06 (Immersive/PMC Rm))

Recording & Production: RP22 - Planning for On-Location Audio Recording and Production (including Surround)

Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA
Steve Remote, Aura-Sonic Ltd. - New York, NY, USA
Corey Schreppel, Minnesota Public Radio|American Public Media
George Wellington, New York Public Radio - New York, NY, USA
Eric Xu, Central Sound at Arizona PBS - Phoenix, AZ, USA

Artists and engineers are recording more productions on location, in venues other than the studio. Whether it's audio for spoken word, chorus, anything from small chamber ensembles to a large symphony orchestra, or complex jazz/pop/rock shows involving splits from FOH, pre-production is a critical aspect of any remote recording. Moreover, with new forms of immersive delivery on the horizon, surround production is now part of the equation. Challenges on location include maintaining a consistent aesthetic across productions, varying venue acoustics, discretion of microphone placement, monitoring, and redundant backup systems. In this workshop, today's working professionals will present relatable, practical methods of tackling production for mobile/on-location events and discuss how it differs from studio recording. Topics include venue scoping, bids and quotes, stage plots, communication with venue or ensemble production managers, talent coordination, and other logistics. Some audio examples (including surround) will be included.

This session is presented in association with the AES Technical Committee on Recording Technology and Practices


Saturday, October 20, 11:30 am — 12:30 pm (1E10)


Immersive & Spatial Audio: IS14 - 3D Audio Philosophies & Techniques for Commercial Music

Bt Gibbs, Skyline Entertainment and Publishing - Morgan Hill, CA, USA; Tool Shed Studios - Morgan Hill, CA, USA

As 3D audio (360° spatial) grows, the majority of content remains in the animated VR world, while commercial audio in all genres continues to be delivered across streaming and download platforms in L+R stereo. With binaural delivery options for spatial audio rapidly improving, commercial audio is being underserved. The ability for commercial artists to deliver studio-quality audio (if not MQA) to consumers with an “in-the-studio” experience is at hand. This presentation will demonstrate studio sessions delivered in 360° video and audio that were simultaneously captured for standard stereo delivery through traditional streaming and download sites, all with a simultaneous and rapid turnaround from pre-production to final masters on both 360° and stereo platforms.


Saturday, October 20, 11:30 am — 12:30 pm (1E21)


Student / Career: SC15 - Audio Effects in Sound Design 101

Brecht De Man, Birmingham City University - Birmingham, UK

Audio effects are the bread and butter of the audio engineer and offer endless creative opportunities to enrich (or spoil) music. But their use is equally relevant in other linear and interactive media, where significant processing is often needed before sources fit the sonic environment or artistic vision. A sound designer can have several tasks within the context of a single production, such as making overdubbed or synthetic sources convincing, making reality more interesting than it is, conveying emotional state, accounting for auditory perception and system limits, and making things sound imaginary, virtual, or magical.
This tutorial can be useful for novices and inspirational for pros, covering fundamentals and taxonomy and showing how these different goals can be achieved with a basic set of processors.


Saturday, October 20, 1:45 pm — 2:45 pm (1E10)

Game Audio & XR: GA15 - Mixing in VR

Daniel Deboy, DELTA Soundworks - Germany
Christian Sander, Dear Reality GmbH - Düsseldorf, Germany

Mixing audio for Virtual Reality (VR) on 2D displays can be a frustrating job. We present a new workflow that enables the engineer to mix object-based audio directly in VR without leaving the HMD. Starting with an overview of spatial audio workflows covering recording, editing, mixing, platforms, and playback, we'll demo mixing in VR live on stage.

