This paper presents an algorithmic approach to computing quantifiable metrics of HRTF spectral magnitude synthesis performance in virtual sound systems, such as those used in VR/AR/MR environments. Using regularized regression in parallel with a statistical information-theoretic technique, the system provides a detailed analysis of a virtual spatializer's spectral magnitude rendering accuracy at a given point in space. Applying the proposed system to the final signal processing stage of a spatial audio rendering pipeline allows the engineer to establish critical performance baselines against which future modifications to the rendering chain can be benchmarked. The proposed system represents an important step toward standardizing and automating virtual audio system evaluation and may ultimately serve as a substitute for human participants in critical listening tasks.
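The abstract does not specify the exact formulation, but the two analysis components it names could be sketched as follows: a ridge (regularized) regression of the rendered log-magnitude spectrum onto the reference HRTF spectrum, alongside a histogram-based mutual-information estimate as one plausible information-theoretic measure. All function and parameter names here (`spectral_accuracy_metrics`, `lam`, the bin count) are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def spectral_accuracy_metrics(ref_mag_db, rend_mag_db, lam=1e-2):
    """Compare a rendered HRTF magnitude response against a reference.

    ref_mag_db, rend_mag_db: log-magnitude spectra (dB) at matched
    frequency bins for one spatial direction.
    lam: ridge regularization weight (hypothetical parameter).
    """
    x = np.asarray(ref_mag_db, dtype=float)
    y = np.asarray(rend_mag_db, dtype=float)

    # Regularized (ridge) regression of rendered onto reference spectrum:
    # an ideal renderer yields slope ~1 and intercept ~0 dB.
    X = np.column_stack([x, np.ones_like(x)])
    w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
    slope, intercept = w

    # Spectral distortion: RMS dB error across frequency bins.
    sd = np.sqrt(np.mean((y - x) ** 2))

    # Histogram-based mutual information between the two spectra,
    # one possible "statistical information theory" measure.
    hist2d, _, _ = np.histogram2d(x, y, bins=16)
    p = hist2d / hist2d.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    mi = np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

    return {"slope": slope, "intercept": intercept,
            "spectral_distortion_db": sd, "mutual_info_bits": mi}
```

A renderer whose output closely tracks the reference would show a regression slope near 1, low spectral distortion, and high mutual information; degradations introduced by a pipeline modification would shift these quantities away from that baseline.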