144th AES Convention (AES Milan 2018)
Game Audio & AR/VR Track Event Details

Wednesday, May 23, 09:15 — 10:15 (Scala 2)


Tutorial: T01 - Crash Course in 3D Audio

Presenter:
Nuno Fonseca, Polytechnic Institute of Leiria - Leiria, Portugal; Sound Particles

A little confused by all the new 3D formats out there? Although most 3D audio concepts have existed for decades, interest in 3D audio has grown in recent years with the new immersive formats for cinema and the rebirth of Virtual Reality (VR). This tutorial will present the most common 3D audio concepts, formats, and technologies, allowing you to finally understand buzzwords like Ambisonics/HOA, binaural, HRTF/HRIR, channel-based audio, object-based audio, and Dolby Atmos, among others.

 
 

Wednesday, May 23, 10:00 — 11:30 (Lobby)

Workshop: W02 - Virtual Reality Audio: B-Format Processing

Chair:
Christof Faller, Illusonic GmbH - Uster, Zürich, Switzerland; EPFL - Lausanne, Switzerland
Panelists:
Svein Berge, Harpex Ltd - Berlin, Germany
Ben Sangbae Chon, Gaudio Lab, Inc. - Seoul, Korea
Ville Pulkki, Aalto University - Espoo, Finland
Oliver Thiergart, International Audio Laboratories Erlangen - Erlangen, Germany

B-Format has had a revival in recent years and has established itself as the audio format of choice for VR videos and content. Experts in signal processing and production tools will present and discuss the latest innovations in B-Format processing, covering processing on the recording and rendering sides as well as B-Format post-production.
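For readers unfamiliar with the format: first-order B-Format carries an omnidirectional channel (W) plus three figure-of-eight channels (X, Y, Z), and the whole sound field can be rotated with a simple matrix, which is one reason it suits head-tracked VR. A minimal sketch, assuming the conventional FuMa-style W scaling of 1/sqrt(2) (the exact convention varies by tool and is not specified by this workshop):

```python
import numpy as np

def bformat_encode(sig, azimuth, elevation):
    """Encode a mono signal into first-order B-format [W, X, Y, Z].

    Angles in radians; azimuth 0 = front, positive = left.
    Uses FuMa-style W scaling (W = signal / sqrt(2)).
    """
    w = sig / np.sqrt(2.0)
    x = sig * np.cos(azimuth) * np.cos(elevation)
    y = sig * np.sin(azimuth) * np.cos(elevation)
    z = sig * np.sin(elevation)
    return np.stack([w, x, y, z])

def bformat_rotate_yaw(b, yaw):
    """Rotate the encoded sound field about the vertical axis.

    Only X and Y mix under yaw; W and Z are unaffected. This is the
    operation a head-tracked renderer applies per head-orientation update.
    """
    w, x, y, z = b
    c, s = np.cos(yaw), np.sin(yaw)
    return np.stack([w, c * x - s * y, s * x + c * y, z])
```

A frontal source lives entirely in W and X; a 90° yaw rotation moves it into Y, without touching the recorded channels individually, which is what makes B-format post-production rotations so cheap.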

 
 

Wednesday, May 23, 15:00 — 16:30 (Lobby)


Tutorial: T06 - Psychoacoustics of 3D Sound Recording (with 9.1 Demos)

Presenter:
Hyunkook Lee, University of Huddersfield - Huddersfield, UK

3D surround audio formats aim to produce an immersive sound field in reproduction by utilizing elevated loudspeakers. To use the added height channels optimally in sound recording and reproduction, it is necessary to understand the psychoacoustic mechanisms of vertical stereophonic perception. Against this background, this tutorial/demo session provides an overview of important psychoacoustic principles that recording engineers and spatial audio researchers need to consider when making a 3D recording with a microphone array. Various microphone array techniques and practical workflows for 3D sound capture will also be introduced, and their pros and cons discussed. The session will include demos of various 9.1 sound recordings, including the recent Auro-3D and Dolby Atmos release for the Siglo de Oro choir.

 
 

Thursday, May 24, 09:30 — 10:30 (Scala 1)

Tutorial: T10 - From Seeing to Hearing: Sound Design and Spatialization for Visually Impaired Film Audiences

Presenters:
Gavin Kearney, University of York - York, UK
Mariana Lopez, University of York - York, UK

This tutorial presents the concepts, processes, and results linked to the Enhancing Audio Description project (funded by AHRC, UK), which seeks to provide accessible audio-visual experiences to visually impaired audiences using sound design techniques and spatialization. Film grammars have been developed throughout film history, but such languages have matured with sighted audiences in mind, assuming that seeing is more important than hearing. We will challenge such assumptions by demonstrating how sound effects, first-person narration, and breaking the rules of sound mixing can allow us to create accessible versions of films that are true to the filmmaker's conception. We will also discuss how the resulting guidelines have been applied in higher education to train filmmakers on the importance of sound.

This session is presented in association with the AES Technical Committee on Audio for Cinema and the AES Technical Committee on Spatial Audio.

 
 

Thursday, May 24, 12:30 — 13:15 (Lobby)


Tutorial: T14 - Music Is the Universal Language

Presenter:
Fei Yu, Dream Studios - China; U.S.

Music supervisor and producer Fei Yu has worked on Chinese video games and movies for the last 8 years. Through her collaborations with Western composers, she has gained a unique perspective on this new form of international collaboration.

This event will discuss the critically acclaimed score to NetEase Games' "Revelation," the popular Chinese fantasy MMORPG that will soon be released internationally. Since it is one of the first Chinese games to be scored by a Westerner for the emerging Chinese video game market, we will use musical examples to show one way of working on such a project. We will also discuss recording techniques for projects like the movie Born in China that use Chinese instruments: how to balance the sound, how to communicate with the composer, and so on.

 
 

Thursday, May 24, 13:15 — 14:30 (Arena 3 & 4)

Workshop: W13 - Object-Based Audio Broadcasting: Practical Aspects

Chair:
Matthieu Parmentier, francetélévisions - Paris, France
Panelists:
Dominique Brulhart, Merging Technologies - Puidoux, Switzerland
Rupert Brun, Fraunhofer IIS - Erlangen, Germany
Christophe Chabanne, Dolby Laboratories - Valbonne, France
Michael Kratschmer, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Scott Norcross, Dolby Laboratories - San Francisco, CA, USA

This workshop will offer a state-of-the-art overview of object-based audio for live broadcasting and post-production. The discussion will underline the role of the Audio Definition Model as an open format for production exchange.

 
 

Thursday, May 24, 16:45 — 17:45 (Scala 1)


Tutorial: T16 - Audio Localization Method for VR Application

Presenter:
Joo Won Park, Columbia University - New York, NY, USA

Audio localization is a crucial component of Virtual Reality (VR) projects, as it contributes to a more realistic VR experience for users. In this event, a method for implementing localized audio that is synced with the user's head movement is discussed. The goal is to process an audio signal in real time to represent a three-dimensional soundscape. This tutorial introduces the mathematical concepts, acoustic models, and audio processing that can be applied to general VR audio development. It also provides a detailed overview of an Oculus Rift / Max/MSP demo that illustrates the techniques.
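The abstract mentions acoustic models for head-synced localization. One classic, simple example of such a model is the Woodworth spherical-head approximation for interaural time difference (ITD), combined with re-expressing the source azimuth in head coordinates on every head-tracker update. A minimal sketch (the head radius, sign convention, and function names are illustrative assumptions, not taken from the session):

```python
import numpy as np

HEAD_RADIUS = 0.0875     # meters, a common average-head assumption
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def woodworth_itd(azimuth):
    """Interaural time difference (seconds) for a rigid spherical head.

    Woodworth's model: ITD = (r / c) * (azimuth + sin(azimuth)),
    valid for azimuths roughly within +/- 90 degrees.
    azimuth in radians; 0 = front, positive = to the listener's left.
    """
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + np.sin(azimuth))

def relative_azimuth(source_az, head_yaw):
    """Source azimuth in head coordinates, wrapped to [-pi, pi].

    This is the per-frame update a head-tracked renderer performs
    before computing localization cues such as ITD.
    """
    rel = source_az - head_yaw
    return (rel + np.pi) % (2 * np.pi) - np.pi
```

For a source 90° to the side this yields an ITD of roughly 0.65 ms, which matches the commonly cited maximum human ITD of about 0.6 to 0.7 ms.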

 
 

Friday, May 25, 09:15 — 10:15 (Scala 1)

Workshop: W20 - Stereophonic Techniques for VR and 360º Content

Presenters:
Hannes Dieterle, SCHOEPS Mikrofone GmbH - Karlsruhe, Germany
Kacper Sagnowski, SCHOEPS Mikrofone GmbH - Karlsruhe, Germany
Helmut Wittek, SCHOEPS GmbH - Karlsruhe, Germany

In head-tracked binaural audio, the two-channel output is produced by real-time convolution of a limited number of sources with HRTFs. There are elaborate systems that measure the HRTFs of each individual source. More general solutions define a grid of sources on a sphere; an arbitrary source is then mapped to this grid based on the (higher-order) Ambisonics principle. With this method, utilized by many state-of-the-art binauralizers, it is possible to binauralize stereophonic virtual loudspeakers without performance problems. The workshop will give examples of stereophonic techniques for VR production and will present a suggested workflow using practical examples. The advantages of using conventional stereo microphone arrays instead of first-order Ambisonics microphones will be shown.
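The pipeline described above (compensate head rotation, map the source to a virtual-loudspeaker grid, convolve with that loudspeaker's HRIR pair) can be sketched as follows. Real binauralizers weight several grid speakers via the Ambisonics principle; here a simple nearest-speaker selection stands in for that mapping, and the array shapes are illustrative assumptions:

```python
import numpy as np

def binauralize_nearest_speaker(sig, source_dir, head_yaw, speaker_dirs, hrirs):
    """Render a mono source binaurally via a virtual-loudspeaker grid.

    source_dir: unit vector of the source in world coordinates.
    speaker_dirs: (n, 3) unit vectors of the virtual loudspeakers.
    hrirs: (n, 2, hrir_len) left/right HRIR pair per loudspeaker.
    """
    # Rotate the source by the inverse head yaw (world -> head coordinates)
    c, s = np.cos(-head_yaw), np.sin(-head_yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    d = rot @ source_dir
    # Map to the nearest virtual loudspeaker (HOA-based systems instead
    # distribute the source over several speakers with panning weights)
    idx = int(np.argmax(speaker_dirs @ d))
    # Convolve with that loudspeaker's static left/right HRIRs
    left = np.convolve(sig, hrirs[idx, 0])
    right = np.convolve(sig, hrirs[idx, 1])
    return np.stack([left, right])
```

The performance point made in the abstract follows from this structure: the HRIR convolutions are fixed per grid speaker, so head rotation only changes the cheap mapping step, not the expensive filters.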

 
 

Friday, May 25, 10:30 — 12:00 (Scala 1)

Tutorial: T18 - The Sound Will Take a Lead in VR

Presenters:
Daniel Deboy, DELTA Soundworks - Germany
Ana Monte, DELTA Soundworks - Germany
Szymon Aleksander Piotrowski, Psychosound Studio - Kraków, Poland
Michal Sokolowski, Slavic Legacy VR - Warsaw, Poland
Paulina Shepanic, Slavic Legacy VR - Warsaw, Poland

This session focuses on the details of the work on two VR games. This innovative platform introduces not only new possibilities and perspectives but also new challenges and questions.

Game "Slavic Legacy VR": the first and largest VR game based on mysterious Slavic mythology and culture. Through virtual reality, the player immerses himself in a dark story set in Eastern Europe. From the very beginning, he has to sneak and hide from a deadly threat and its consequences. The unique fairytale atmosphere and award-winning graphics, uncommon in VR, are a mixture of realism and the creators' artistic vision. All of the tools, elements of the environment, and encountered mythical beasts were created based on ethnographic research so that they faithfully reflect the life and beliefs of the people of that time. The game also confronts the problem of motion sickness with a completely new way of moving and of designing the locations. These and many other features make Slavic Legacy a valued, one-of-a-kind, and immersive production for the educational and entertainment market. The authors will present gameplay with binaural playback and discuss technical, perceptual, and aesthetic aspects of the production process and their choices. The panelists will also bring up the challenges of sound implementation in Unreal Engine.


Game “The Stanford Virtual Heart”
Pediatric cardiologists at Lucile Packard Children's Hospital Stanford are using immersive virtual reality technology to explain complex congenital heart defects, which are some of the most difficult medical conditions to teach and understand. The Stanford Virtual Heart experience helps families understand their child's heart conditions by employing a new kind of interactive visualization that goes far beyond diagrams, plastic models, and hand-drawn sketches. For medical trainees, it provides an immersive and engaging new way to learn about the two dozen most common and complex congenital heart anomalies by allowing them to inspect and manipulate the affected heart, walk around inside it to see how the blood is flowing, and watch how a particular defect interferes with the heart's normal function. The panelists will give insight into the challenges of the sound design and how it was integrated in Unity.

 
 

Friday, May 25, 12:15 — 13:30 (Scala 1)

Workshop: W26 - The EBU ADM Renderer

Co-chairs:
Chris Pike, BBC R&D - Salford, UK; University of York - York, UK
Michael Weitnauer, IRT - Munich, Germany
Panelists:
Nicolas Epain, b<>com
Thomas Nixon, BBC R&D - Salford, UK

This workshop will introduce the use of the Audio Definition Model to store and distribute object-based audio master files that transport channel-, object-, and scene-based content.
The discussion will underline the role of the OBA renderer within the post-production workflow and introduce EAR, the EBU ADM Renderer, recently published as open-source software.

 
 

Friday, May 25, 12:45 — 13:45 (Lobby)

Tutorial: T19 - Supporting the Story with Sound—Audio for Film and Animation

Presenters:
Kris Górski, AudioPlanet - Koleczkowo, Poland; Technology University of Gdansk
Kyle P. Snyder, Ohio University, School of Media Arts & Studies - Athens, OH, USA

Storytelling is a primary goal of film, and there's no better way to ruin a story than with bad sound. This session will focus on current workflows, best practices, and techniques for film and animation by breaking down recent projects from the panelists.

Kyle Snyder will present audio from the "Media in Medicine" documentary The Veterans’ Project, a film that seeks to bring civilian physicians and the American public closer to the individual voices of veterans of all ages, as they return to civilian life and seek to live long productive lives after they have served. The Veterans’ Project is a feature-length documentary that weaves together the testimonies of injured or ill combat veterans who navigate the complexities of military, VA, and civilian medical systems in seeking treatment and reintegration into the civilian world.

Rodzina Treflików, presented by Kris Górski, is a classic stop-motion animation series for children. It uses original solutions for the mouth animation and, generally speaking, tries to combine old techniques with new ones to make an appealing proposition for the youngest audience. It is meant primarily for airing on children's network TV but is also shown in selected movie theaters. At present, the 4th season is being finished. The series' soundtrack is produced in 5.1 and stereo.

 
 

Friday, May 25, 14:00 — 15:00 (Scala 1)


Tutorial: T22 - A Practical Use of Spatial Audio for Storytelling, and Different Delivery Platforms

Presenter:
Axel Drioli, 3D Sound Designer and Producer - London, UK / Lille, France

An overview of spatial audio's characteristics, strengths, and weaknesses for storytelling. A case study will be analyzed to better understand what can be done at every stage of the spatial audio production process, from recording to decoding to different delivery platforms. This presentation is tailored for an audience of content creators and engineers who may know little (or nothing at all) about spatial audio's capabilities for storytelling, using demos and examples to explain and clarify concepts.

 
 

Friday, May 25, 15:15 — 16:15 (Scala 1)


Tutorial: T24 - 360 & VR - Create OZO Audio and Head Locked Spatial Music in Common DAWs

Presenter:
Tom Ammermann, New Audio Technology GmbH - Hamburg, Germany

VR and 360 are mainly headphone applications. Binaural headphone virtualization, well known for decades, has therefore become more important than ever before. Fortunately, current binaural virtualization technologies offer a fantastic new experience, far removed from the creepy low-quality "surround simulations" of the past. The presentation will show how to create content for Nokia's new 360 audio format, OZO Audio, and how to create head-locked binaural music in common DAWs.

 
 

Friday, May 25, 16:30 — 18:00 (Scala 1)

Workshop: W28 - Mixing VR Being in VR

Presenters:
Daniel Deboy, DELTA Soundworks - Germany
Christian Sander, Dear Reality GmbH - Germany

Mixing Virtual Reality (VR) content such as 360° film can be a frustrating job. Current workflows show either an equirectangular projection of the 360° film or just a small portion of the full view on regular 2D displays, next to the common DAW environment. A representation of the audio objects as an overlay may help map each object to the correct visual location, but it is no replacement for viewing the film with a head-mounted display (HMD). We present a new workflow that enables the engineer to mix object-based audio directly in VR without leaving the HMD.

 
 

Saturday, May 26, 09:30 — 11:00 (Lobby)

Workshop: W29 - Spatial Audio Microphones

Chair:
Helmut Wittek, SCHOEPS GmbH - Karlsruhe, Germany
Panelists:
Gary Elko, mh acoustics - Summit, NJ, USA
Johannes Kares, Sennheiser - Vienna, Austria
Hyunkook Lee, University of Huddersfield - Huddersfield, UK
Oliver Thiergart, International Audio Laboratories Erlangen - Erlangen, Germany
Tomasz Zernicki, Zylia sp. z o.o. - Poznan, Poland

Multichannel loudspeaker setups as well as Virtual Reality applications enable spatial sound to be reproduced with high spatial resolution. On the recording side, however, it is more complicated to capture such resolution. Various concepts for microphone arrays exist in theory and practice. In this workshop the different concepts are presented by corresponding experts, and their differences, applications, and pros and cons are discussed. The array solutions covered include coincident and spaced Ambisonics arrays as well as stereophonic multi-microphone arrays.

This session is presented in association with the AES Technical Committee on Microphones and Applications.

 
 

Saturday, May 26, 10:30 — 12:00 (Arena 3 & 4)

Special Event: SE07 - Immersive Audio for Reality or Virtual Reality—Does it Make a Difference?

Chair:
Daniel Deboy, DELTA Soundworks - Germany
Panelists:
Tom Ammermann, New Audio Technology GmbH - Hamburg, Germany
Felix Andriessens, Ton und Meister - Germany
Ana Monte, DELTA Soundworks - Germany
Tom Parnell, BBC Research & Development - Salford, UK
Martin Rieger, VRTONUNG - Germany
Agnieszka Roginska, New York University - New York, NY, USA
Christian Sander, Dear Reality GmbH - Germany
Christian Vaida, cvmusic film/ton - Germany; au3Dio

In this roundtable discussion we invite experts in the field of immersive audio production for music, cinema, and 360° film content to discuss similarities and differences regarding formats, workflows, and aesthetics. Channel- and object-based immersive audio systems, as well as headphone virtualization using binaural rendering, are trending topics in audio production. With the rise of new applications like virtual reality, 3D audio in cinema, and home entertainment, compatibility for cross-platform distribution is increasingly requested, yet engineers are often familiar only with playback on loudspeaker arrays or with binaural rendering, not both. In this event we will try to establish common ground for all production types and look at the different requirements from both a technical and an aesthetic point of view.

 
 

Saturday, May 26, 11:15 — 12:15 (Lobby)


Workshop: W31 - Introducing the IEM Plug-in Suite

Presenter:
Daniel Rudrich, Institute of Electronic Music and Acoustics Graz - Graz, Austria

The "IEM Plug-in Suite" is a free and open-source audio plug-in suite created by staff and students at IEM. It features 3D higher-order Ambisonic plug-ins up to 7th order, including encoders, decoders, a surround visualizer, dynamic processors, and delay and reverb effects, which also allow the auralization of directional sources. Moreover, the plug-in suite includes a tool to design Ambisonic decoders for arbitrary loudspeaker layouts employing the AllRAD and imaginary-loudspeaker downmix approach. All implementation is research-driven, and the suite is meant to grow continuously with recent developments, e.g., improvements in binaural Ambisonic rendering. The workshop presents the plug-ins and practically demonstrates the basic applications and workflow in Reaper.
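To give non-specialists a feel for what "designing an Ambisonic decoder for a loudspeaker layout" means: a decoder is just a matrix that turns the Ambisonic channels into loudspeaker feeds. The AllRAD approach used by the IEM suite is considerably more sophisticated; the sketch below shows only the simplest first-order "sampling" decoder for a horizontal loudspeaker ring, assuming FuMa-style W scaling (all names and normalization choices here are illustrative assumptions):

```python
import numpy as np

def foa_horizontal_decoder(speaker_azimuths):
    """First-order 'sampling' decoder matrix for a horizontal speaker ring.

    Returns D of shape (n_speakers, 3) so that
    speaker_feeds = D @ [W, X, Y],
    obtained by sampling the encoding functions at each loudspeaker
    direction and normalizing by the number of loudspeakers.
    """
    az = np.asarray(speaker_azimuths, dtype=float)
    n = len(az)
    # Encoding functions sampled at the loudspeaker directions:
    # sqrt(2) undoes the FuMa 1/sqrt(2) scaling of the W channel.
    D = np.stack([np.full(n, np.sqrt(2.0)), np.cos(az), np.sin(az)], axis=1)
    return D * (2.0 / n)
```

With four speakers at 0°, 90°, 180°, and 270°, a frontal source comes out at full level from the front speaker, silent from the rear one, and at reduced level from the sides: the gentle spread that motivates more advanced designs such as max-rE weighting and AllRAD.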

 
 

