Using nonindividualized HRTFs in virtual audio synthesis produces front-back confusions, up-down reversals, in-head localization, and timbral coloration, with elevation and frontal localization being the most affected. Obtaining individualized HRTFs, however, is a tedious process that requires complex acoustical measurements for each listener, so an HRTF model that avoids such measurements would greatly simplify individualization. In this research, individualization of median-plane HRTFs is explored using frontal projection headphones combined with a spherical head model, since the frontal position of the headphone transducer inherently captures the listener's idiosyncratic frontal spectral cues. To create the HRTFs, the important peak (P1) and notches (N1, N2) are first extracted from the frontal headphone response and then shifted in frequency according to the elevation angle. Detailed subjective experiments indicated that subjects localized virtual sound sources with the modeled HRTFs about as accurately as with individualized HRTFs.
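As a rough illustration of this peak/notch parameterization, the sketch below uses Python with scipy.signal.find_peaks to pick a P1 candidate and the two deepest notches out of a magnitude response, then shifts a feature frequency with elevation. The prominence threshold, the linear elevation-to-frequency slope, and the helper names are illustrative assumptions; the abstract does not specify the paper's actual extraction parameters or shift rule.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_p1_n1_n2(mag_db, freqs_hz):
    """Locate the first prominent peak (P1) and the two deepest
    notches (N1, N2) in a magnitude response given in dB.
    The 3 dB prominence threshold is illustrative, not from the paper."""
    peaks, _ = find_peaks(mag_db, prominence=3.0)
    notches, props = find_peaks(-mag_db, prominence=3.0)
    if peaks.size < 1 or notches.size < 2:
        raise ValueError("expected at least one peak and two notches")
    p1 = freqs_hz[peaks[0]]
    # Rank notches by depth (prominence) and keep the two deepest,
    # reported in ascending frequency order as N1, N2.
    deepest = np.argsort(props["prominences"])[::-1][:2]
    n1, n2 = np.sort(freqs_hz[notches[deepest]])
    return p1, n1, n2

def shift_feature(f_center_hz, elevation_deg, slope_hz_per_deg=30.0):
    """Shift a feature's center frequency with elevation angle.
    A linear mapping is a placeholder for the paper's actual
    elevation-to-frequency relationship, which is not given here."""
    return f_center_hz + slope_hz_per_deg * elevation_deg

# Example: a synthetic response with one peak and two notches.
freqs = np.linspace(1000, 16000, 512)
mag = (6 * np.exp(-((freqs - 4000) / 500) ** 2)      # P1 near 4 kHz
       - 12 * np.exp(-((freqs - 7000) / 400) ** 2)   # N1 near 7 kHz
       - 9 * np.exp(-((freqs - 11000) / 600) ** 2))  # N2 near 11 kHz
p1, n1, n2 = extract_p1_n1_n2(mag, freqs)
print(p1, shift_feature(n1, elevation_deg=30.0), n2)
```

In practice the extracted features would be re-synthesized into a median-plane HRTF for each target elevation; the snippet above only covers the extraction and frequency-shift steps named in the abstract.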
http://www.aes.org/e-lib/browse.cfm?elib=18536