145th AES CONVENTION Immersive & Spatial Audio Track Event Details

AES New York 2018

Wednesday, October 17, 10:15 am — 11:30 am (1E08)

Game Audio & XR: GA01 - Practical Recording Techniques for Live Music Production in 6DOF VR

Co-moderators:
Cal Armstrong, University of York - York, UK
Gavin Kearney, University of York - York, UK
David Rivas Méndez, University of York - York, UK
Panelists:
Hashim Riaz, Abbey Road Studios - London, UK
Mirek Stiles, Abbey Road Studios - London, UK

As virtual and augmented reality technologies move towards systems that can deliver full six degrees of freedom (6DOF), it follows that good strategies must be employed to create effective 6DOF audio capture. In a musical context, this means that if we record an ensemble then we must give the end user the potential to move close to and even around audio sources with a high degree of plausibility to match the visuals. This workshop looks at recording strategies that enable 3DOF/3DOF+ and 6DOF for live music performances.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games

 
 

Wednesday, October 17, 10:45 am — 12:15 pm (1E17 (Surround Rm))

Immersive & Spatial Audio: IS01 - Spatial Audio-Video Creations for Music, Virtual Reality and 3D Productions – Case Studies

Moderator:
Tomasz Zernicki, Zylia sp. z o.o. - Poznan, Poland
Panelists:
Florian Grond, McGill University - Montreal, Canada
Yao Wang, ICTUS - Boston, MA, USA
Edward Wersocki, Northeastern University - Boston, MA, USA

The goal of the workshop is to present spatial audio-video creations in practice. Professional audio engineers and musicians will talk about their 360, 3D, and ambient productions combining sound and vision. Among the projects discussed will be “Unraveled,” a 360 spatial experience in which the listener finds themselves in the middle of the entire recording. The speakers will also describe the process of making 3D audiovisual footage displayed in a 360° dome, as well as spatial recordings of concert music. The workshop will focus especially on spherical microphone arrays, which make it possible to record an entire 3D sound scene. The separation of individual sound sources in post-production, combined with Ambisonics, gives creators unlimited possibilities to achieve unique audio effects.

 
 

Wednesday, October 17, 4:15 pm — 5:45 pm (1E08)

Immersive & Spatial Audio: IS02 - Delivering Interactive Experiences Using HOA through MPEG-H

Chair:
Patrick Flanagan, THX Ltd. - San Francisco, CA, USA
Panelists:
Stephen Barton, Afterlight Inc.
Simon Calle, THX Ltd.
Nick Laviers, Respawn Entertainment
Aaron McLeran, Epic Games
Nils Peters, Qualcomm, Advanced Tech R&D - San Diego, CA, USA

A panel discussion about HOA content creation and how HOA should transform the way we produce audio for all types of media. Discussion includes DAWs for creating HOA, MPEG-H file compression for delivery of up to 6th-order Ambisonics, broadcasting tools and possibilities, and how HOA and MPEG-H are gaining traction in the world.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games

 
 

Wednesday, October 17, 4:15 pm — 5:45 pm (1E06 (Immersive/PMC Rm))


Recording & Production: RP04 - The WoW Factor

Presenter:
Jim Anderson, Anderson Audio NY - New York, NY, USA; Clive Davis Institute of Recorded Music, New York University - New York, NY, USA

What is Wow? Who has Wow? Where is Wow? Why is Wow needed? When can I get Wow? How can I get Wow?
Over one hundred years ago, audiences experienced “Wow” listening to a singer and comparing their sound with a recording. Observers at the time found that it was “almost impossible to tell the difference” between what was live sound and what was recorded. Sixty years ago, the transition from monaural sound to stereophonic brought “realism” into listeners’ homes, and today audiences can be immersed in sound. This talk will trace the history of how listeners have been educated and entertained, through to the latest sonic developments, saying to themselves and each other: “Wow!”

 
 

Thursday, October 18, 9:00 am — 10:00 am (1E17 (Surround Rm))

Immersive & Spatial Audio: IS03 - Workflows and Techniques for Ambisonic Recording and Postproduction

Chair:
Ming-Lun Lee, University of Rochester - Rochester, NY, USA
Panelists:
Olivia Canavan, University of Rochester - Rochester, NY, USA
Steve Philbert, University of Rochester - Rochester, NY, USA

For a 360/3D VR video with head-tracked binaural audio, we have to use Ambisonics to capture the overall sound field. Although some high-end VR cameras have built-in Ambisonic microphones, most of them can only record first-order Ambisonics in the horizontal plane, and the sound quality is generally poor. To achieve high-quality 3D audio, a recommended solution is to record audio with a separate Ambisonic microphone and then mix it with the video during recording or post-editing. Drawing on experience recording Ambisonics with the em32 Eigenmike microphone array, Zylia ZM-1 microphone, Core Sound TetraMic, and Sennheiser Ambeo VR microphone at many concerts, this workshop aims to offer optimized workflows and practical techniques for Ambisonic recording, conversion, editing, rendering, playback, and publishing.
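As an illustrative aside (not part of the workshop materials): head-tracked playback of an Ambisonic recording works by rotating the captured sound field to counter the listener's head movement. A minimal sketch of a first-order, horizontal-plane (W, X, Y) yaw rotation, assuming a simple encoding convention without channel weighting (real-world conventions such as FuMa or AmbiX differ in ordering and normalization):

```python
import numpy as np

def encode_bformat(signal, azimuth_deg):
    """Encode a mono signal as horizontal first-order B-format (W, X, Y).

    Assumed convention: W = s, X = s*cos(az), Y = s*sin(az).
    Real formats (FuMa, AmbiX) add channel weighting/ordering rules.
    """
    az = np.radians(azimuth_deg)
    return signal, signal * np.cos(az), signal * np.sin(az)

def rotate_yaw(w, x, y, yaw_deg):
    """Rotate the horizontal sound field counterclockwise by yaw_deg.

    For head tracking, the scene is rotated by the negative of the
    listener's head yaw so that sources stay fixed in world space.
    W is omnidirectional and is unaffected by rotation.
    """
    phi = np.radians(yaw_deg)
    x_r = np.cos(phi) * x - np.sin(phi) * y
    y_r = np.sin(phi) * x + np.cos(phi) * y
    return w, x_r, y_r
```

A source encoded straight ahead (azimuth 0°) and rotated by 90° ends up entirely in the Y (left/right) channel, which is what a head turn of -90° should produce.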

 
 

Thursday, October 18, 10:15 am — 11:15 am (1E08)

Game Audio & XR: GA04 - Games v. Cinema: Grudge Match

Chair:
Steve Martz, THX Ltd. - San Rafael, CA, USA
Panelists:
Lydia Andrew, Ubisoft - Quebec City, Canada
Jason Kanter, Audio Director, Avalanche Studios - New York, NY, USA
Harold Kilianski, Fanshawe College - MIA - London, ON, Canada
John Whynot, Berklee College of Music - Los Angeles, CA, USA

Game audio and audio for cinema. Two worlds that create epic soundscapes. How much are they similar or different? Do they share the same tools, design plans, and challenges, or are they completely individual? Join this session to hear from four leaders in the industry, two from cinema and two from games, as they discuss audio design for their fields. Learn how sound design, dialog, and FX strategies differ between the two realms and how they sometimes even work together.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games

 
 

Thursday, October 18, 10:15 am — 11:15 am (1E17 (Surround Rm))

Immersive & Spatial Audio: IS04 - STAAG Implementation Expanded: A Practical Demo of STAAG Mic Technique and its Use in Stereo, Surround, and Height-Channel Capture for Recording and Broadcast

Presenters:
David Angell, Central Sound at Arizona PBS - Phoenix, AZ, USA
Jamie Tagg, Indiana University - Bloomington, IN, USA; Stagg Sound Services, LLC

While working on location, audio engineers are often challenged by insufficient monitoring, making choices which lead to timbral, wet/dry balance, and stereo image problems. This tutorial examines the use of STAAG (Stereo Technique for Augmented Ambience Gradient) and its established advantages for addressing stereo image, acoustic realism, and flexibility in the mix. While originally optimized for immersive headphone listening, this technique has proven advantageous when upscaling to stereo, surround, and even height-channel loudspeaker systems and stereo/surround broadcast streams. Also, this setup is advantageous when working on location and in live performance scenarios due to its compact arrangement and compatibility with a wide variety of microphones. This tutorial will feature physical configurations and playback of audio examples in a variety of formats.

 
 

Thursday, October 18, 10:15 am — 11:15 am (1E21)

Recording & Production: RP08 - Space, Place, and Bass: Providing Modern Metal Music with an Appropriate Balance between Heaviness and Clarity

Presenters:
Steven Fenton, University of Huddersfield - Huddersfield, West Yorkshire, UK
Mark Mynett, University of Huddersfield - Huddersfield, UK

Distinct challenges are posed when providing the various sounds/performances of a modern metal mix with appropriate space, place, and bass. This is especially the case when down-tuning is combined with fast performance subdivisions and ensemble rhythmic synchronization.
This workshop covers intermediate-to-advanced “space, place and bass” mix principles that afford this production style an appropriate balance between heaviness and clarity, including: frequency bracketed kick and bass approaches; anti-masking mix theory (with a focus on different “designs”); dynamic EQ and multi-band compression use; series and parallel dynamics approaches; and time-based processing principles.
Mark Mynett, who lectures in Music Technology and Production at Huddersfield University, is a record producer and author of Metal Music Manual, the world’s first book on producing/engineering/mixing and mastering contemporary heavy music.

 
 

Thursday, October 18, 11:30 am — 12:30 pm (1E17 (Surround Rm))

Game Audio & XR: GA05 - A Systemic Approach to Interactive Dialogues on Assassin’s Creed Odyssey—From Speech to SFX to Music

Presenters:
Lydia Andrew, Ubisoft - Quebec City, Canada
Greig Newby, Ubisoft - Quebec, Canada

From the beginning of Assassin’s Creed Odyssey, we recognized that this rich, continually unfolding open world game demanded more than the traditional manual, minute-by-minute approach to audio design, integration and mixing. The dual protagonists, the interactive dialogues, and the massive scale meant we needed to build systems that were both responsive to the complexity of our game world and to the individuality of our players’ choices.

The presentation will cover this systemic approach, showing how we created and used tools and pipelines to support our player’s freedom of choice. We will talk about the complexity of constructing, recording, and integrating the voice into the interactive dialogue system, focusing on the new tools and pipelines we developed. We will show how music is used in the interactive dialogues to support character, emotion, and player choice. We will talk about how we aimed to maintain the consistency of the player experience with Foley, SFX, and ambiences by seamlessly moving in and out of the interactive dialogues. Finally, we will discuss how we brought all these elements together through systems that were the friend, not the enemy, of creativity.

The attendees will walk away with an understanding of the potential challenges of implementing a branching interactive dialog system in an open world game and some insights on how to transform their traditional linear pipelines.

 
 

Thursday, October 18, 11:30 am — 12:30 pm (1E08)

Immersive & Spatial Audio: IS05 - Measuring Head Related Transfer Functions: Practicalities, Processing and Applications

Presenters:
Cal Armstrong, University of York - York, UK
Gavin Kearney, University of York - York, UK

At the heart of good spatial audio reproduction over headphones is the measurement of high-quality binaural filters, commonly known as head-related transfer functions (HRTFs). This workshop explores the practical challenges involved in measuring HRTF datasets and the subsequent signal processing required for high-quality binaural rendering. We discuss measurement techniques, microphone choices, subject considerations, and equalization strategies. We also explore anechoic binaural measurements vs. binaural room impulse responses, as well as spatial sampling considerations for applications such as Ambisonic rendering for virtual and augmented reality. Finally we look at what makes a good quality HRTF set—is it the resultant timbre, the sense of externalization, or other factors that make people prefer one HRTF dataset over another?
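As an illustrative aside (not part of the workshop materials): once measured, an HRTF pair is typically applied in the time domain by convolving a mono source with the corresponding left- and right-ear head-related impulse responses (HRIRs). A minimal sketch, using made-up two- and three-tap filters in place of measured data:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source to two ear signals by HRIR convolution.

    In practice the HRIRs come from a measured dataset for the desired
    source direction; the filters used below are toy placeholders.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Toy HRIRs: the right ear is delayed and attenuated relative to the
# left, crudely mimicking a source on the listener's left side.
hrir_l = np.array([1.0, 0.2])
hrir_r = np.array([0.0, 0.5, 0.1])

mono = np.array([1.0, 0.0, 0.0])  # a unit impulse
left, right = binaural_render(mono, hrir_l, hrir_r)
```

Because the input is an impulse, each ear signal is simply the (zero-padded) HRIR itself, which makes the interaural delay and level difference easy to see.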

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games

 
 

Thursday, October 18, 12:30 pm — 1:30 pm (1E06 (Immersive/PMC Rm))


Game Audio & XR: GA06 - Shadow of the Tomb Raider: A Case Study Dolby Atmos Video Game Mix

Presenter:
Rob Bridgett, Eidos Montreal - Montreal, Canada

Shadow of the Tomb Raider was developed at Eidos Montreal over a three-year period and had its final mix at Pinewood Studios in the UK over a two-week period. The game was mixed entirely in Dolby Atmos for Home Theatre and was one of the first console games to author height-based 3D sound specifically for this exciting new surround format.

Audio Director Rob Bridgett will cover all aspects of bringing this mix to fruition, from planning to execution, in this fascinating post-mortem. Highlights include:
• Mix philosophy overview for a blockbuster AAA action title.
• Unexpected side-effects of height-based surround.
• Critical tools and techniques for surround and overhead-based mixing.
• Implementing loudness guidelines.
• Differences and benefits of Atmos and object-based surround sound systems for games, over and above those of movies.
• Middleware and live-tuning workflow examples and descriptions.
• Mix team composition and roles.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games

 
 

Thursday, October 18, 3:00 pm — 5:00 pm (1E08)

Immersive & Spatial Audio: IS06 - Spatial Audio Microphones

Chair:
Helmut Wittek, SCHOEPS Mikrofone GmbH - Karlsruhe, Germany
Panelists:
Gary Elko, mh acoustics - Summit, NJ USA
Brian Glasscock, Sennheiser
Hyunkook Lee, University of Huddersfield - Huddersfield, UK
Len Moskowitz, Core Sound LLC - Teaneck, NJ, USA
Tomasz Zernicki, Zylia sp. z o.o. - Poznan, Poland

Multichannel loudspeaker setups as well as virtual reality applications enable spatial sound to be reproduced at high resolution. On the recording side, however, it is more complicated to capture high spatial resolution. Various microphone array concepts exist in theory and practice. In this workshop the different concepts are presented by corresponding experts, and their differences, applications, pros, and cons are discussed. The array solutions include coincident and spaced Ambisonic arrays as well as stereophonic multi-microphone arrays.

AES Technical Council This session is presented in association with the AES Technical Committee on Microphones and Applications

 
 

Thursday, October 18, 5:00 pm — 6:00 pm (1E17 (Surround Rm))

Immersive & Spatial Audio: IS07 - Virtual Reality Audio: B-Format Processing

Chair:
Christof Faller, Illusonic GmbH - Uster, Zürich, Switzerland; EPFL - Lausanne, Switzerland

B-Format has had a revival in recent years and has established itself as the audio format of choice for VR videos and content. Experts in signal processing and production tools will present and discuss the latest innovations in B-Format processing, including processing on the recording and rendering sides as well as B-Format post-production.

AES Technical Council This session is presented in association with the AES Technical Committee on Spatial Audio

 
 

Thursday, October 18, 5:15 pm — 5:45 pm (1E06 (Immersive/PMC Rm))


Audio for Cinema: AC04 - The 5th Element – How a Sci-Fi Classic Sounds with a New 3D Audio Mix

Presenter:
Tom Ammermann, New Audio Technology GmbH - Hamburg, Germany

The 5th Element—it’s certainly a milestone in sci-fi film history. Recently it was completely reworked, with a new 4K film scan and a remix of all the audio elements in Dolby Atmos and Headphone Surround 3D. This version was released in Germany as a UHD Blu-ray and offers a fantastic new experience of this great Luc Besson production. The session offers listening examples and inside information on the production.

 
 

Thursday, October 18, 6:00 pm — 7:00 pm (Off-Site 1)

Immersive & Spatial Audio: IS08 - Ozark Henry on the Holodeck: Maps to the Stars

Presenters:
Tom Beyer, New York University - New York, NY, USA
Paul Geluso, New York University - New York, NY, USA
Agnieszka Roginska, New York University - New York, NY, USA

A distributed concert featuring live performances by international gold and platinum Sony Music recording artist Ozark Henry. The concert will bring together the latest immersive sound technologies, internationally distributed musicians, and motion-capture-driven avatars interacting with live dancers. This is an ongoing exploration into the creative application of the NYU Holodeck, including immersive sound technologies such as Ambisonics, live MPEG-H broadcast, and a multi-channel immersive sound and visual system. Remote locations include Trondheim, Norway, and Buenos Aires, Argentina. This concert is in collaboration with THX and Qualcomm and will demonstrate the use of an off-the-shelf Ateme real-time encoder to stream MPEG-H audio and render THX Spatial Audio over loudspeakers in a 5.1.4 configuration.

Location: Frederick Loewe Theater, NYU
35 West 4th St

 
 

Friday, October 19, 9:00 am — 10:00 am (1E08)


Immersive & Spatial Audio: IS09 - Spatial Reproduction on Mobile Devices

Presenter:
Yesenia Lacouture Parodi, HUAWEI Technologies Duesseldorf GmbH - Munich, Germany

Techniques to convey near-realistic 3D audio scenes through loudspeakers have existed for decades, though most of them rely on a large number of loudspeakers and, of course, good sound quality. With mobile devices such as smartphones and tablets, however, we usually have access to at most two channels, the location of the speakers is not always optimal, and the quality of micro-speakers is far from what we would call reasonable. In this tutorial we will discuss how it is possible to overcome some of the limitations we encounter when reproducing spatial audio with mobile devices, what kinds of applications can benefit from these technologies, and what challenges remain for us to solve.

AES Technical Council This session is presented in association with the AES Technical Committee on Spatial Audio

 
 

Friday, October 19, 9:30 am — 12:30 pm (1E12)

Immersive & Spatial Audio: IS10 - Facebook 360 Training

Presenters:
Andres A. Mayo, Andres Mayo Mastering & Audio Post - Buenos Aires, Argentina
Abesh Thakur, Facebook - California, USA

Crash course on the end-to-end workflow for spatial audio design and asset preparation of 360 and 180 immersive videos using the Facebook 360 Spatial Workstation tools. This workshop will go through:

a) What is spatial audio, and why is it important for Immersive videos;
b) Formats and standards that are accepted on popular delivery platforms such as Facebook, Oculus or YouTube;
c) Using plugins in a DAW to author ambisonic mixes that respond to real-time headtracking during runtime;
d) Basic setup for live streaming 360 videos with ambisonic audio;
e) Overview of popular 360 cameras and ambisonic microphones that can be used for linear spatial audio design.

There are two sessions, AM and PM, and space is limited to 35 people in each; this is a ticketed event with a $50 fee for AES members and a $100 fee for non-members. Attendees must preregister for this event and already have a confirmed All Access (not Exhibits Plus) registration for the 145th Convention valid on Friday, October 19th. For additional details and a registration link, please visit the Facebook 360 Training page.

 
 

Friday, October 19, 10:15 am — 11:15 am (1E08)

Immersive & Spatial Audio: IS11 - The Audio Edge: Ambient Computing, Artificial Intelligence and Machine Learning with Sound

Presenters:
Sally Kellaway, Microsoft - Seattle, WA, USA
George Valavanis, Microsoft - Seattle, WA, USA

Audio professionals understand that sound is a powerful signal for capturing and conveying information about the world. From sound designers to composers, we use the communicative capacity of sound to tell stories. Advancements in Artificial Intelligence and Machine Learning are introducing new ways to process sound as data to better understand our environment and expand our awareness.

Presenting findings from Microsoft's Mixed Reality at Work development team, George Valavanis and Sally Kellaway discuss audio's role in our cloud-connected future. From ML data capture workflows to the Microsoft Azure and Dynamics 365 tools used to develop data insights, we'll uncover how audio will expand the way we interact with our world, define a new class of hardware technologies, and become the data stream of the future.

 
 

Friday, October 19, 2:00 pm — 5:00 pm (1E12)

Immersive & Spatial Audio: IS12 - Facebook 360 Training

Presenters:
Andres A. Mayo, Andres Mayo Mastering & Audio Post - Buenos Aires, Argentina
Abesh Thakur, Facebook - California, USA

Crash course on the end-to-end workflow for spatial audio design and asset preparation of 360 and 180 immersive videos using the Facebook 360 Spatial Workstation tools. This workshop will go through:

a) What is spatial audio, and why is it important for Immersive videos;
b) Formats and standards that are accepted on popular delivery platforms such as Facebook, Oculus or YouTube;
c) Using plugins in a DAW to author ambisonic mixes that respond to real-time headtracking during runtime;
d) Basic setup for live streaming 360 videos with ambisonic audio;
e) Overview of popular 360 cameras and ambisonic microphones that can be used for linear spatial audio design.

There are two sessions, AM and PM, and space is limited to 35 people in each; this is a ticketed event with a $50 fee for AES members and a $100 fee for non-members. Attendees must preregister for this event and already have a confirmed All Access (not Exhibits Plus) registration for the 145th Convention valid on Friday, October 19th. For additional details and a registration link, please visit the Facebook 360 Training page.

 
 

Friday, October 19, 3:00 pm — 4:00 pm (1E17 (Surround Rm))


Game Audio & XR: GA10 - Wwise Spatial Audio: A Practical Approach to Virtual Acoustics

Presenter:
Nathan Harris, Audiokinetic - Montreal, QC, Canada

Wwise Spatial Audio is becoming increasingly advanced and now allows for real-time modeling of acoustic phenomena including reflection, diffraction, and sound propagation by informing the sound engine about 3D geometry in the game or simulation.
In this workshop, Nathan Harris, a software developer on the Audiokinetic research and development team, will give an overview of the technology behind Wwise Spatial Audio. He will demonstrate how reflection, diffraction, and sound propagation are simulated, and how the Wwise authoring tool can be used to monitor them and to enable creative intervention when desired. Using the Wwise Audio Lab, a “sandbox” for experimentation, Nathan will walk through a live listening demonstration.

 
 

Friday, October 19, 3:30 pm — 4:30 pm (1E08)

Game Audio & XR: GA11 - Microtalks: (Listening) Into the Future

Moderator:
Sally Kellaway, Microsoft - Seattle, WA, USA
Panelists:
Jean-Pascal Beaudoin, Headspace Studio - Montreal, QC, Canada; Felix & Paul Studios - Santa Monica, CA, USA
Linda A. Gedemer, Source Sound VR - Woodland Hills, CA USA; University of Salford - Salford, UK
Sadah Espii Proctor, Espii Studios - Brooklyn, NY, USA
Margaret Schedel, Stony Brook University - Stony Brook, NY, USA
George Valavanis, Microsoft - Seattle, WA, USA

The audio industries have been in a perpetual state of technological revolution since their inception, making for a volatile, interesting, and fast-paced environment that has left many trailing in its dust. With the exploration of games, virtual, augmented, and mixed reality, and artificial intelligence continuing full steam ahead, how do we fit into this new world, and how important can we make audio? Our five speakers each have 8 minutes to explore what the future holds for audio (or the future of listening).

The microtalk format is a panel of five speakers, each speaking for exactly 8 minutes, with 24 slides auto-advancing every 20 seconds. The session is designed to explore the space around important audio industry topics; speakers aim to provoke and challenge standards of topic, thought, and presentation.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games

 
 

Friday, October 19, 4:30 pm — 5:45 pm (1E09)

Product Development: PD14 - Audio Source Separation—Recent Advancement, Applications, and Evaluation

Chair:
Chungeun Kim, University of Surrey - Guildford, Surrey, UK; Sony Interactive Entertainment Europe - London, UK
Panelists:
Jon Francombe, BBC Research and Development - Salford, UK
Nima Mesgarani, Columbia University - New York, NY, USA
Bryan Pardo, Northwestern University - Evanston, IL, USA

Audio source separation is a signal processing technique inspired by the corresponding human cognitive ability in auditory scene analysis. It has a wide range of applications, including speech enhancement, sound event detection, and repurposing. Although it was initially only possible to separate sources in very specific capture configurations, and only to a suboptimal level of quality, advanced signal processing techniques, particularly deep learning-related approaches, have both widened the applicability and enhanced the performance of source separation. In this workshop the recent advancement of source separation techniques in various use cases will be discussed, along with the challenges that the research community is currently facing. Research activities on the quality aspects specific to source separation, toward effective performance evaluation, will also be introduced.
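As an illustrative aside (not part of the workshop materials): many separation approaches, classical and deep-learning alike, share a common core of estimating a time-frequency mask that is applied to the mixture spectrogram. A minimal sketch of Wiener-style soft masking, assuming the source magnitude estimates are simply given (an "oracle"; in a real system a model predicts them):

```python
import numpy as np

def soft_mask_separate(mix_spec, est_mag_a, est_mag_b, eps=1e-12):
    """Separate a two-source mixture spectrogram with soft masks.

    mix_spec:  complex STFT of the mixture
    est_mag_*: estimated magnitude spectrograms of the two sources
               (assumed given here; normally produced by a model).
    Returns the two masked complex spectrograms, reusing the
    mixture's phase, as is standard for magnitude-domain masking.
    """
    power_a = est_mag_a ** 2
    power_b = est_mag_b ** 2
    total = power_a + power_b + eps  # eps avoids division by zero
    mask_a = power_a / total
    mask_b = power_b / total
    return mask_a * mix_spec, mask_b * mix_spec
```

When the two sources occupy disjoint time-frequency bins, the masks approach 0/1 and the separation is near-perfect; overlap in time-frequency is exactly what makes real mixtures hard.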

AES Technical Council This session is presented in association with the AES Technical Committee on Semantic Audio Analysis

 
 

Friday, October 19, 4:45 pm — 6:00 pm (1E08)

Game Audio & XR: GA12 - Omni-Directional: Sound Career Paths in VR/AR

Chair:
Chris Burke, DamianL, Inc. - Brooklyn, NY, USA
Panelists:
Jeanine Cowen, Berklee College of Music - Boston, MA, USA
Aaron McLeran, Epic Games
Andrew Sheron, Freelance Composer/Engineer - New York, NY, USA
Michael Sweet, Berklee College of Music - Boston, MA, USA

After fits and starts in the film, gaming, home cinema, and cellular industries, VR and AR are no longer in search of a reason for being. Great immersive visuals demand great immersive audio and the speed at which standards are being hammered out would make Alan Blumlein’s head spin. While a final interoperability spec is still a ways off, the object-oriented nature of ambisonics means that producers can create now for the systems of the future. This is already having a huge effect on the industry with new career paths emerging in sound design, coding, systems design, and more. Jump in the bitstream and learn everything you need to know with our panel of producers and tool developers, and get ready for your new career in sound for immersive media!

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games

 
 

Saturday, October 20, 9:00 am — 10:00 am (1E10)

Immersive & Spatial Audio: IS13 - Canceled

 
 

Saturday, October 20, 10:15 am — 11:15 am (1E10)

Game Audio & XR: GA14 - The Stanford Virtual Heart

Chair:
Daniel Deboy, DELTA Soundworks - Germany
Panelist:
Ana Monte, DELTA Soundworks - Germany

Pediatric cardiologists at Lucile Packard Children's Hospital Stanford are using immersive virtual reality technology to explain complex congenital heart defects, which are some of the most difficult medical conditions to teach and understand. The Stanford Virtual Heart experience helps families understand their child’s heart condition. For medical trainees, it provides an immersive and engaging new way to learn about the most common and complex congenital heart anomalies. The panelists will give insight into the challenges of sound design with a scientific approach and how it was integrated in Unity.

 
 

Saturday, October 20, 10:45 am — 1:00 pm (1E06 (Immersive/PMC Rm))

Recording & Production: RP22 - Planning for On-Location Audio Recording and Production (including Surround)

Presenters:
Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA
Steve Remote, Aura-Sonic Ltd. - NYC, NY USA
Corey Schreppel, Minnesota Public Radio|American Public Media
George Wellington, New York Public Radio - New York, NY, USA
Eric Xu, Central Sound at Arizona PBS - Phoenix, AZ, USA

Artists and engineers are recording more productions on location or in venues other than the studio. Whether it’s audio for spoken word, chorus, small chamber ensembles, large symphony orchestras, or complex jazz/pop/rock shows involving splits from FOH, pre-production is a critical aspect of any remote recording. Moreover, with new forms of immersive delivery on the horizon, surround production is now part of this equation. Challenges for on-location work include maintaining a consistent aesthetic across productions, varying venue acoustics, discretion of microphone placement, monitoring, and redundant backup systems. In this workshop, today’s working professionals will share relatable and practical methods of tackling production for mobile/on-location events and discuss how it differs from studio recording. Topics include venue scoping, bids and quotes, stage plots, communication with venue or ensemble production managers, talent coordination, and other logistics. Some audio examples (including surround examples) will be included.

AES Technical Council This session is presented in association with the AES Technical Committee on Recording Technology and Practices

 
 

Saturday, October 20, 11:30 am — 12:30 pm (1E10)


Immersive & Spatial Audio: IS14 - 3D Audio Philosophies & Techniques for Commercial Music

Presenter:
Bt Gibbs, Skyline Entertainment and Publishing - Morgan Hill, CA, USA; Tool Shed Studios - Morgan Hill, CA, USA

As 3D audio (360 spatial) grows, the majority of content remains in the animated VR world. Commercial audio (in all genres) continues to be delivered across streaming and download platforms in L+R stereo. With the binaural delivery options for spatial audio rapidly improving, commercial audio options are being underserved. The ability of commercial artists to deliver studio-quality audio (if not MQA) to consumers with an “in-the-studio” experience is at hand. This presentation will demonstrate studio sessions delivered in 360 video and 360 audio, simultaneously captured for standard stereo delivery through traditional streaming and download sites, all delivered in a rapid, simultaneous turnaround from pre-production to final masters on both 360 and stereo platforms.

 
 

Saturday, October 20, 11:30 am — 12:30 pm (1E21)


Student / Career: SC15 - Audio Effects in Sound Design 101

Presenter:
Brecht De Man, Birmingham City University - Birmingham, UK

Audio effects are the bread and butter of the audio engineer and offer endless creative opportunities to enrich (or spoil) music. But their use is equally relevant in other linear and interactive media, where significant processing is often needed before sources fit the sonic environment or artistic vision. A sound designer can have several tasks within the context of a single production, such as making overdubbed or synthetic sources convincing, making reality more interesting than it is, conveying emotional state, accounting for auditory perception and system limits, and making things sound imaginary, virtual, or magical.
This tutorial can be useful for novices and inspirational for pros, covering fundamentals and taxonomy and showing how these different goals can be achieved with a basic set of processors.

 
 

Saturday, October 20, 1:15 pm — 3:15 pm (1E06 (Immersive/PMC Rm))

Immersive & Spatial Audio: IS15 - 3D Audio Acoustic Recording Capture, Dissemination, and Perception

Presenters:
David Bowles, Swineshead Productions LLC - Berkeley, CA, USA
Paul Geluso, New York University - New York, NY, USA
Hyunkook Lee, University of Huddersfield - Huddersfield, UK
Agnieszka Roginska, New York University - New York, NY, USA

A panel discussion on recording techniques for capturing and disseminating 3D acoustic music recordings, with an emphasis on psychoacoustic and perceptual challenges. The 90-minute panel will be immediately followed by a 90-minute playback session. Q&A will be held during the panel and after the playback, so that the audience can experience the playback without interruption.

 
 

Saturday, October 20, 1:45 pm — 2:45 pm (1E10)

Game Audio & XR: GA15 - Mixing in VR

Presenters:
Daniel Deboy, DELTA Soundworks - Germany
Christian Sander, Dear Reality GmbH - Düsseldorf, Germany

Mixing audio for virtual reality (VR) on 2D displays can be a frustrating job. We present a new workflow that enables the engineer to mix object-based audio directly in VR without leaving the HMD. Starting with an overview of spatial audio workflows, from recording and editing through mixing, platforms, and playback, we’ll be demoing mixing in VR live on stage.

 
 

