Meeting Topic: Immersive Audio and Ambisonics for VR and Video Games
Moderator Name: CRAS AES Faculty Advisor - David Kohr
Speaker Name: Peter Costa - CEO of Baltu Studios
Meeting Location: Conservatory of Recording Arts and Sciences, 1205 N. Fiesta Blvd, Gilbert, AZ
Peter began by introducing himself and describing his journey in professional audio. In 2014 he started working in video game audio, and three years later he became interested in using VR to improve lives; his VR work has since been used to treat PTSD and for educational purposes. His objective for virtual reality is to immerse the audience so completely that they feel fully involved in the experience. He asks, "What do you want the user to think and feel before, during, and after the experience?" Answering that question involves a combination of factors, which he identifies as Visual, Sound, Agency, and Space. The visual component of a performance is what distinguishes professionals from amateurs. The sound component examines how audio is perceived by the mind. He approaches agency through the word "interface," because it describes an interaction or exchange between two parties; people interface with things through both reactivity and interactivity. The space element shapes what influences the end user and helps create the environment.

He then went over the concepts of 3D sound, ambisonics, and HRTF (Head-Related Transfer Function). 3D sound aims to create spatial audio: where sound is placed within a 3D space, and how that sound interacts with the user. Ambisonics is a technique for capturing and reproducing the full sound field, typically using special tetrahedral microphones whose capsules are combined to craft a 3D sonic image. It is built on spherical harmonics, which describe how the sound pressure field varies over the surface of a sphere around the listener. Ambisonics creates a sphere of sound, and rendering that sound for playback is referred to as "decoding": the sphere itself is "format agnostic," so the decoder must know how the listener is oriented within the sphere in order to output the correct signals. The third concept, HRTF, helps us recreate three-dimensional sound on a device such as headphones.
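The encode/decode idea Peter described can be made concrete with a small sketch. This is not his studio's pipeline; it is a minimal first-order ambisonic example in Python, using the classic FuMa (B-format) convention, and the function names are illustrative. A mono source is encoded into the four B-format channels, then "decoded" with a simple projection decoder to a square of four loudspeakers:

```python
import math

def encode_fuma(sample, azimuth, elevation):
    """Encode a mono sample into first-order B-format (FuMa convention).

    azimuth/elevation are in radians; azimuth is measured
    counterclockwise from straight ahead.
    """
    w = sample * (1.0 / math.sqrt(2.0))                    # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)   # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)   # left-right
    z = sample * math.sin(elevation)                       # up-down
    return w, x, y, z

def decode_square(w, x, y):
    """Basic (projection) decode of the horizontal B-format channels
    to four loudspeakers at +45, +135, -135, and -45 degrees."""
    speakers = [math.radians(a) for a in (45, 135, -135, -45)]
    return [0.5 * (w * math.sqrt(2.0) + x * math.cos(a) + y * math.sin(a))
            for a in speakers]
```

Encoding a source straight ahead and decoding it sends most of the energy to the two front speakers, which is the "format agnostic sphere, rendered for a specific layout" idea in miniature.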
HRTF places a virtual microphone at the center of the sphere and takes into account the head, ears, shoulders, and anything else that may affect our perception of sound. To create an HRTF, a subject is first placed in an anechoic chamber with two microphones in their ears, and a sound source is moved around them to 512 different points. The resulting HRTF filter set is then loaded into the game engine, where it helps decode the ambisonic sphere by filtering frequencies according to their positions relative to the listener.

Peter also spoke about an organization he is involved in, PHX VR for Good, which tries to use VR technology to create positive change in the world. They help create museum exhibits, PTSD treatment therapy, cultural-sensitivity VR programs, and other immersive experiences that inspire and educate. He closed the event by speaking about his experience in the music industry, advising that it is best to have many tools in your toolbelt because you never know where the industry will go.
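The HRTF playback step described above amounts to picking the measured filter pair nearest to the sound's direction and running the signal through it for each ear. The following Python sketch assumes a toy "bank" of measured impulse responses keyed by azimuth; the data, function names, and nearest-neighbor lookup are illustrative simplifications, not any specific engine's API:

```python
def convolve(signal, impulse_response):
    """Direct-form FIR convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def binauralize(signal, hrir_bank, azimuth):
    """Filter a mono signal with the nearest measured HRIR pair.

    hrir_bank maps a measured azimuth in degrees to a
    (left_ear_ir, right_ear_ir) tuple, like the 512 measured
    points described above (here reduced to a handful).
    """
    nearest = min(hrir_bank, key=lambda a: abs(a - azimuth))
    left_ir, right_ir = hrir_bank[nearest]
    return convolve(signal, left_ir), convolve(signal, right_ir)
```

With a source placed to the listener's left, the left-ear output comes out louder than the right, which is the directional cue the HRTF filters encode. Real engines interpolate between measurement points rather than snapping to the nearest one.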
Written By: CRAS AES Officeholder - Peter G.