Virtual audio synthesis and playback over headphones inherently suffer from several limitations, such as front-back confusion and in-head localization of the sound presented to the listener. The use of non-individual head-related transfer functions (HRTFs) further increases front-back confusion and degrades the virtual auditory image. In this paper, we present a method for customizing non-individual HRTFs by embedding personal cues derived from the distinctive morphology of the individual's ear, and we study the frontal projection of sound over headphones to reduce front-back confusion in 3-D audio playback. Additional processing blocks, such as decorrelation and front-back biasing, are implemented to externalize the frontal image and control its auditory depth. Subjective tests are conducted with these processing blocks, and their impact on localization is reported.
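The paper does not specify how its decorrelation block is implemented; a common approach in spatial-audio rendering is to filter each channel with a short random-phase all-pass FIR, which lowers interaural coherence while leaving the magnitude spectrum untouched. The sketch below illustrates that general technique only and is not the authors' method; all names and parameters are illustrative.

```python
import numpy as np

def decorrelate(signal, sr=48000, dur_ms=10.0, seed=0):
    """Filter a mono signal with a short random-phase all-pass FIR
    (flat magnitude spectrum), a common decorrelation technique used
    to reduce interaural coherence and aid externalization.
    Parameters here are illustrative, not from the paper."""
    n = int(sr * dur_ms / 1000)          # filter length in samples
    rng = np.random.default_rng(seed)
    # Random phases for positive-frequency bins (excluding DC/Nyquist)
    phase = rng.uniform(-np.pi, np.pi, n // 2 - 1)
    # Build a conjugate-symmetric, unit-magnitude spectrum -> real all-pass IR
    spec = np.concatenate(([1.0],
                           np.exp(1j * phase),
                           [1.0],
                           np.exp(-1j * phase[::-1])))
    h = np.real(np.fft.ifft(spec))
    return np.convolve(signal, h)[: len(signal)]

# Different seeds give mutually decorrelated left/right channels.
x = np.random.default_rng(1).standard_normal(4800)
left = decorrelate(x, seed=1)
right = decorrelate(x, seed=2)
rho = np.corrcoef(left, right)[0, 1]  # interchannel correlation, well below 1
```

Because each channel keeps the same magnitude spectrum as the input, timbre is largely preserved while the binaural image becomes more diffuse and externalized.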