High Order Spatial Audio Capture and Its Binaural Head-Tracked Playback Over Headphones with HRTF Cues
A theory and a system for capturing an audio scene and then rendering it remotely are developed and presented. The sound capture is performed with a spherical microphone array. The sound field at the array, and in a surrounding region of space, is deduced from the captured sound and represented using either spherical wave-function or plane-wave expansions. The representation is then transmitted to a remote location for immediate rendering or stored for later reproduction. The sound renderer, coupled with a head tracker, reconstructs the acoustic field using individualized head-related transfer functions to preserve the perceptual spatial structure of the audio scene. Rigorous error bounds and a Nyquist-like sampling criterion for the representation of the sound field are presented and verified.
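The abstract's Nyquist-like sampling criterion relates the truncation order of the spherical expansion to the array size and the highest frequency of interest. As a rough illustration only (the paper's exact bound is not reproduced here), a widely used rule of thumb truncates the expansion at order N ≈ ⌈ka⌉, where k is the wavenumber and a the array radius; an order-N expansion then has (N + 1)² coefficients, setting a lower bound on the number of microphones. The function names and the rule of thumb below are assumptions for illustration, not the paper's formulation:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C


def truncation_order(freq_hz: float, radius_m: float) -> int:
    """Rule-of-thumb truncation order N ~= ceil(k * a) for a spherical
    array of radius a at wavenumber k = 2*pi*f / c.  This is a common
    heuristic from the spherical-array literature, not the paper's
    rigorous error bound."""
    k = 2.0 * math.pi * freq_hz / SPEED_OF_SOUND
    return math.ceil(k * radius_m)


def min_microphones(order: int) -> int:
    """An order-N spherical-harmonic expansion has (N + 1)**2
    coefficients, so at least that many sampling points (microphones)
    are needed to estimate them."""
    return (order + 1) ** 2


# Example: a hypothetical 4.2 cm-radius array capturing up to 8 kHz.
N = truncation_order(8000.0, 0.042)
print(N, min_microphones(N))
```

The quadratic growth of `min_microphones` with order is why practical arrays limit either bandwidth or spatial resolution: doubling the usable frequency range at a fixed radius roughly doubles the required order and quadruples the microphone count.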