A key limitation of spatial audio rendering over loudspeakers is the degradation that occurs as the listener's head moves away from the intended sweet spot. In this paper, we propose a method for designing immersive audio rendering filters using adaptive synthesis techniques that update the filter coefficients in real time; combined with a head-tracking system, these techniques compensate for changes in the listener's head position. The rendering filter's weight vectors are synthesized in the frequency domain using magnitude and phase interpolation in frequency sub-bands.
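The paper's actual filter design is not reproduced here, but the core idea of interpolating frequency-domain weight vectors by magnitude and phase (rather than by complex value) can be illustrated with a minimal sketch. The function name and the two-filter linear blend are assumptions for illustration only; the paper's sub-band scheme is more elaborate.

```python
import numpy as np

def interpolate_weights(H_a, H_b, alpha):
    """Blend two frequency-domain filter weight vectors (hypothetical helper).

    Magnitudes are interpolated linearly and phases along the shortest
    angular path, per frequency bin. Interpolating the raw complex values
    instead can produce magnitude dips wherever the two phases disagree.
    """
    mag = (1 - alpha) * np.abs(H_a) + alpha * np.abs(H_b)
    # Phase difference wrapped to (-pi, pi], so interpolation takes
    # the shortest path around the unit circle.
    dphi = np.angle(H_b * np.conj(H_a))
    phase = np.angle(H_a) + alpha * dphi
    return mag * np.exp(1j * phase)
```

In a head-tracked renderer, `alpha` would be driven by the tracked head position, selecting between weight vectors designed for neighboring listening positions.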