A theory and a system for capturing an audio scene and rendering it remotely are developed and presented. Sound capture is performed with a spherical microphone array. The sound field at the array location, and in a region of space surrounding it, is deduced from the captured sound and represented using either spherical wave functions or plane-wave expansions. The representation is then transmitted to a remote location for immediate rendering or stored for later reproduction. The sound renderer, coupled with a head tracker, reconstructs the acoustic field using individualized head-related transfer functions to preserve the perceptual spatial structure of the audio scene. Rigorous error bounds and a Nyquist-like sampling criterion for the representation of the sound field are presented and verified.
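The expansion and the truncation rule the abstract alludes to can be illustrated numerically. Below is a minimal sketch, not the authors' implementation: it assumes the standard interior spherical wave-function expansion p(r, θ, φ) = Σ_n Σ_m C_n^m j_n(kr) Y_n^m(θ, φ) and the common rule of thumb that orders up to N ≈ ⌈kr⌉ capture the field in a region of radius r; the helper names (`truncation_order`, `field_from_coefficients`) and the plane-wave test are illustrative choices, not from the paper.

```python
# Minimal sketch (assumptions noted above): evaluate a truncated interior
# expansion p = sum_{n=0}^{N} sum_{m=-n}^{n} C_n^m j_n(kr) Y_n^m(theta, phi)
# and observe the Nyquist-like truncation behavior on a known plane wave.
import numpy as np
from scipy.special import spherical_jn, sph_harm

def truncation_order(k, r):
    """Rule-of-thumb order N ~ ceil(k*r) for a region of radius r
    (hypothetical helper; the paper derives rigorous error bounds)."""
    return int(np.ceil(k * r))

def field_from_coefficients(C, k, r, theta, phi):
    """Evaluate the truncated expansion at (r, theta=polar, phi=azimuth).
    C[n] is a list of the 2n+1 coefficients C_n^m, m = -n..n."""
    p = 0j
    for n, row in enumerate(C):
        jn = spherical_jn(n, k * r)          # spherical Bessel j_n(kr)
        for i, c in enumerate(row):
            m = i - n
            # scipy's sph_harm argument order is (m, n, azimuth, polar)
            p += c * jn * sph_harm(m, n, phi, theta)
    return p

# Test field: a unit plane wave arriving from +z, whose exact expansion
# coefficients are C_n^m = 4*pi * i^n * conj(Y_n^m(arrival direction)).
k = 2 * np.pi * 1000.0 / 343.0   # wavenumber at 1 kHz, c = 343 m/s
r = 0.1                          # evaluate 10 cm from the origin
exact = np.exp(1j * k * r)       # exact plane-wave value on the z axis
print("rule-of-thumb order:", truncation_order(k, r))   # -> 2 here
for N in range(1, 8):
    C = [[4 * np.pi * 1j**n * np.conj(sph_harm(i - n, n, 0.0, 0.0))
          for i in range(2 * n + 1)] for n in range(N + 1)]
    err = abs(field_from_coefficients(C, k, r, 0.0, 0.0) - exact)
    print(N, err)   # the error drops sharply once N exceeds k*r ~ 1.8
```

In the pipeline the abstract describes, it is such a finite set of expansion coefficients, rather than raw microphone signals, that would be transmitted to the remote renderer or stored for later reproduction.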
http://www.aes.org/e-lib/browse.cfm?elib=13369