With the increasing demand for AR/VR technologies, obtaining individualized Head-Related Transfer Functions (HRTFs) to enable accurate reproduction of binaural spatial audio has become a high-priority research topic. Meanwhile, recent developments in generative AI have yielded substantial success in several domains, including audio, language, and images. In this work we propose a framework that uses a 3D Convolutional Neural Network (CNN)-based Vector-Quantized Variational AutoEncoder (VQ-VAE) to first learn a regularized latent representation of the HRTFs, leveraging both spatial and spectral correlations between neighboring magnitude HRTFs. We then use the Transformer architecture to learn mappings between latent sequences derived from spatially sparse HRTF measurements and the latent sequences defining high-spatial-resolution HRTFs. We thereby predict HRTFs at 1440 locations from sparse measurements at 25 locations, while also allowing freedom in the choice of sparse sampling locations. We achieve a mean Log-Spectral Distortion (LSD) error of 4.5 dB, and also demonstrate a contrived but informative case achieving a mean LSD of 3 dB when evaluated over 10 validation subjects.
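To make two of the abstract's core ingredients concrete, the sketch below shows (a) the vector-quantization step at the heart of any VQ-VAE (nearest-codebook lookup over encoder latents) and (b) the standard Log-Spectral Distortion metric used for evaluation. This is a minimal illustration in NumPy, not the paper's implementation: the latent dimensionality, codebook size, and all variable names here are assumptions for demonstration.

```python
import numpy as np

def vector_quantize(z, codebook):
    """VQ-VAE quantization step: snap each latent vector to its
    nearest codebook entry (squared Euclidean distance).

    z:        (N, D) encoder outputs (toy stand-in for the 3D-CNN latents)
    codebook: (K, D) learned code vectors
    Returns (quantized latents, code indices); the index sequence is
    what a Transformer would model downstream.
    """
    # Pairwise squared distances between every latent and every code: (N, K)
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)                # nearest code per latent: (N,)
    return codebook[idx], idx

def log_spectral_distortion(h_ref, h_est):
    """Standard LSD in dB between two magnitude spectra (1-D arrays):
    the RMS of 20*log10(|H_ref| / |H_est|) across frequency bins.
    """
    err_db = 20.0 * np.log10(np.abs(h_ref) / np.abs(h_est))
    return float(np.sqrt(np.mean(err_db ** 2)))

# Toy example: 2-D latents against a 3-entry codebook.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
zq, idx = vector_quantize(z, codebook)    # idx -> [0, 1]
```

In a full pipeline, `idx` for the sparse-measurement latents would form the Transformer's input sequence and the dense-grid latents its target sequence; the decoder then maps predicted codes back to magnitude HRTFs, which `log_spectral_distortion` scores against the measured ones.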