Vision Steered Beam-Forming and Transaural Rendering for the Artificial Life Interactive Video Environment (ALIVE)
This paper describes the audio component of a virtual reality system that uses remote sensing to free the user from body-mounted tracking equipment. Position information obtained from a camera is used to steer both a beamforming microphone array, for far-field speech input, and a two-loudspeaker transaural audio system, for rendering 3-D audio.
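The abstract does not specify the beamforming algorithm, but steering a microphone array toward a camera-derived source position is commonly done with delay-and-sum beamforming: the propagation delay from the source to each microphone is compensated before the channels are averaged. The sketch below illustrates that idea under stated assumptions (a 16 kHz sample rate, integer-sample delay compensation, and free-field geometry); the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at room temperature (assumed)
FS = 16000              # sample rate in Hz (assumed)

def delay_and_sum(signals, mic_positions, source_position,
                  fs=FS, c=SPEED_OF_SOUND):
    """Steer an array toward source_position via delay-and-sum beamforming.

    signals:          (n_mics, n_samples) array of microphone signals
    mic_positions:    (n_mics, 3) microphone coordinates in metres
    source_position:  (3,) source coordinates in metres (e.g. from a camera)
    """
    mic_positions = np.asarray(mic_positions, dtype=float)
    source = np.asarray(source_position, dtype=float)
    # Propagation delay from the source to each microphone, relative
    # to the nearest microphone, converted to samples.
    dists = np.linalg.norm(mic_positions - source, axis=1)
    delays = (dists - dists.min()) / c * fs
    shifts = np.round(delays).astype(int)  # integer-sample approximation
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, s in zip(signals, shifts):
        # Advance each channel so copies of the source signal align.
        out[: n - s] += sig[s:]
    return out / len(signals)
```

With the delays aligned, the desired source adds coherently while sound from other directions adds incoherently, which is what makes far-field speech input usable without a body-mounted microphone.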