Events

Pre-conference Events


 

Pre-conference Events Recorded in AltspaceVR:

 

 

Friday, August 14, 2020, 10am-noon Pacific Time

Ramani Duraiswami of the University of Maryland / VisiSonics Presents:

HRTF Personalization at Scale.

Event starts in AVAR Papers at 10am, then moves to AVAR Lobby after the presentation

Click here to watch the YouTube video of this event!

Ramani Duraiswami is Professor of Computer Science and UMIACS; Director of the Perceptual Interfaces and Reality Lab, University of Maryland, College Park; and Co-Founder and CEO of VisiSonics Corporation, College Park.

Abstract:

Differences in people's anatomy, especially in ear shape, lead to differences in the scattered sound received at the entrance to each ear canal. These differences are usually captured via the anechoic head-related impulse response (HRIR) or its transfer function, the head-related transfer function (HRTF). Combining these with models for early and late reverberation, and possibly head tracking, audio engines seek to create a virtual auditory reality. Because easy personalization is lacking, generic HRTFs are often used instead; these provide a modicum of externalization but remain subject both to gross errors, such as mislocalization and front-back confusion, and to subtler problems, such as poorer rendering of some content with non-individual transfer functions.
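To make the rendering step concrete, the sketch below shows in simplified C++ how a mono source can be binauralized by convolving it with a measured (or personalized) left/right HRIR pair for one direction. This is only an illustration of the idea described above, not VisiSonics' engine; the struct and function names are invented, and production renderers use partitioned FFT convolution, HRIR interpolation under head tracking, and added reverberation.

    // Illustrative sketch only: time-domain convolution of a mono source with
    // an HRIR pair to produce a binaural (left/right) signal for one fixed
    // source direction. Names are hypothetical.
    #include <cstddef>
    #include <vector>

    struct BinauralOut { std::vector<float> left, right; };

    // Naive full convolution: y[n] = sum_k x[k] * h[n - k]
    static std::vector<float> convolve(const std::vector<float>& x,
                                       const std::vector<float>& h) {
        if (x.empty() || h.empty()) return {};
        std::vector<float> y(x.size() + h.size() - 1, 0.0f);
        for (std::size_t n = 0; n < x.size(); ++n)
            for (std::size_t k = 0; k < h.size(); ++k)
                y[n + k] += x[n] * h[k];
        return y;
    }

    // Render a mono source with the HRIR pair for its direction.
    BinauralOut renderBinaural(const std::vector<float>& mono,
                               const std::vector<float>& hrirLeft,
                               const std::vector<float>& hrirRight) {
        return { convolve(mono, hrirLeft), convolve(mono, hrirRight) };
    }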

We have developed fast ways of obtaining accurate individual HRTFs: direct measurement using a reciprocity-based technique that takes only a few seconds, and fast-multipole-accelerated boundary element computations on meshes obtained from scans. Combined with other data, including images of subjects' ears and some coarse anthropometry, the resulting set of about 1000 subject HRTFs (roughly 200 with 3D meshes) is used as the basis for creating personalized HRTFs. The longer-term goal for this database is the creation of 3D models for individual HRTFs.

As a shorter-term goal, we are leveraging images of users' ears to develop a pipeline, based on our previous research, that personalizes HRTFs for game and movie consumption. Using images obtained from the cameras shipping in smartphones and personal computers, we are able to achieve a significantly improved experience. Matching a user's ear images to those in our database, while accounting for length scale both in the matching and in creating interaural time differences, allows a superior personalization. Delivered via the cloud, this approach works rapidly and at scale and can be embedded into applications.
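The role of a length scale in setting interaural time differences (ITDs) can be illustrated with Woodworth's spherical-head approximation, in which the ITD grows linearly with head radius. The C++ sketch below is illustrative only; the abstract does not state which ITD model the VisiSonics pipeline actually uses.

    // Illustrative only: Woodworth's spherical-head approximation of the
    // interaural time difference, scaled by an individual head radius.
    #include <cmath>

    // azimuthRad:  source azimuth in radians (0 = straight ahead),
    //              valid for |azimuth| <= pi/2 in this simple form
    // headRadiusM: listener's head radius in meters (~0.0875 m on average)
    // returns the ITD in seconds
    double woodworthITD(double azimuthRad, double headRadiusM) {
        const double speedOfSound = 343.0;  // m/s in air at ~20 degrees C
        return (headRadiusM / speedOfSound) * (azimuthRad + std::sin(azimuthRad));
    }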

(joint work with Bowen Zhi, Dmitry Zotkin, Adam O’Donovan, David Gadzinski and Liza Williams, VisiSonics Corporation)

 

 

Thursday, August 13, 2020, 10am-noon Pacific Time

Epic Games Presents:

Soundfield Submixes in Unreal: An Extensible System for Spatial Audio

Event starts in AVAR Workshops at 10am, then moves to AVAR Lobby after the presentation

Click here to watch the YouTube video of this event!

Abstract: Soundfield Submixes allow the Unreal Audio Engine to handle virtually any spatial audio paradigm in a way that is well integrated into the existing Unreal Engine pipeline, from Ambisonics encoding and decoding to HRTF renderers and opaque spatial audio solutions such as Dolby Atmos and DTS:X. This talk is a key opportunity to describe the feature in detail to the spatial audio community at large. Join Epic audio developers Aaron McLeran, Ethan Geller, Max Hayes, and Charles Egenbacher for this exciting talk in AltspaceVR.

 

Further Details: One of the most challenging aspects of advanced audio spatialization systems is integrating them into existing game audio engines without sacrificing the sound designer's control. In the past, audio sources rendered through an audio spatialization plugin had to bypass our submix system, which is how we perform DSP processing on mixdowns of multiple sources. Meanwhile, most audio spatialization plugins internally mix individual sources down to an intermediate soundfield representation, such as a virtual speaker array or Ambisonics, before performing any HRTF processing. Considering this, we realized the best way to integrate audio spatialization solutions into our existing audio engine's feature set was to allow spatialization plugins to extend how we (A) downmix sources to an intermediate representation, and (B) use that intermediate representation to generate an audio stream based on physical speaker positions (a rough interface sketch of these two extension points follows the list below). Soundfield Submixes are a new feature that lets sound designers fully control audio spatialization solutions within the submix graph using pre-existing workflows in the audio engine. This talk will describe the following:

  1. How Soundfield Submixes are used in the Unreal Editor
  2. Various examples of previously difficult audio spatialization problems and how Soundfield Submixes solve them
  3. How to implement your own audio spatialization solution using the Soundfield interfaces.
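As a rough illustration of the two extension points described above, the C++ sketch below shows a hypothetical codec interface with an encode step (A) that downmixes positioned sources into an intermediate soundfield representation and a decode step (B) that renders that representation for a concrete output layout. All names and signatures here are invented for illustration; this is not the actual Unreal Engine Soundfield API.

    // Hypothetical sketch, not Unreal Engine code: the two extension points a
    // spatialization plugin would implement.
    #include <vector>

    struct SourceBuffer  { std::vector<float> samples; float azimuth, elevation, distance; };
    struct SpeakerLayout { std::vector<float> speakerAzimuths; };  // greatly simplified

    // Opaque intermediate representation (e.g. an Ambisonics mix or a
    // plugin-specific virtual speaker array).
    struct SoundfieldPacket { std::vector<float> data; };

    class ISoundfieldCodec {
    public:
        virtual ~ISoundfieldCodec() = default;
        // (A) mix one positioned source into the intermediate representation
        virtual void encode(const SourceBuffer& source, SoundfieldPacket& mix) = 0;
        // (B) render the accumulated mix for physical speaker positions (or headphones)
        virtual std::vector<float> decode(const SoundfieldPacket& mix,
                                          const SpeakerLayout& layout) = 0;
    };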

 

 

Tuesday, August 11, 2020, 10am-noon Pacific Time

Karlheinz Brandenburg, Brandenburg Labs and TU Ilmenau, Germany

Development and Test of Binaural Techniques for Immersive Audio: A Research Survey

Event starts in AVAR Papers at 10am, then moves to AVAR Lobby after the presentation.

Click here to watch the YouTube video of this event!

Abstract: A perfect auditory illusion via headphones is a dream that is many decades old. The talk will first introduce techniques that have been tried before, discuss their shortcomings, and then report on newer research, focusing on the work done at TU Ilmenau over the last 10 years.
For testing binaural reproduction systems, the well-established paradigms based on comparison to known references are in most cases not available. The talk will discuss the basic difficulties and touch on several approaches to testing immersive audio as proposed in the MPEG-I standardization effort.
Finally, some ideas for products using immersive audio in a novel way (Personalized Auditory Reality) will be explained.

This is joint work with Florian Klein (TU Ilmenau), Annika Neidhardt (TU Ilmenau), Nils Merten (Brandenburg Labs and TU Ilmenau), Ulrike Sloma (TU Ilmenau), Thomas Sporer (Fraunhofer IDMT), and Franciska Wollwert (Brandenburg Labs).

 

 

Monday, August 3, 2020, 10am-noon Pacific Time

AVAR Mixer presented by HTC VIVE®

Event starts in AVAR Lobby at 10am, then moves to AVAR Papers at 11am

Meet and network with fellow sound professionals in this relaxed VR mixer. Featuring flash-talk previews from authors and panelists, as well as orientation from former AltspaceVR sound designer Megan Frazier, it's a great way to lead up to your 2020 AVAR conference.

Andrew Champlin, of HTC Creative Labs, Seattle, will start the flash-talks with "The Spatial Delta: Just-Noticeable-Difference with the VIVE 3DSP". Andrew will discuss the VIVE 3DSP Audio SDK, a collaboration between sound designers and acoustic researchers at HTC to create an accessible spatial audio toolkit for the VR development community. This will be followed by brief introductions to AVAR Presentations:

Aaron Berkson, The Past, Present, and Future of Immersive Music: Key Developments from Gabrieli to Virtual Reality

Tom Ammermann, Tools and technology for production and application of spatial audio

Tomasz Rudzki, On the Measurement of Perceived Lateral Angle Using Eye Tracking

Yosuke Tanabe, Tesseral Array for Group Based Spatial Audio Capture and Synthesis

Matan Ben-Asher, Virtual Reality Music in the Real World
