Wave Field Synthesis (WFS) enables the reproduction of complex auditory scenes and moving sound sources. Moving sound sources induce time-variant delays of the source signals. To avoid severe distortions, sophisticated delay interpolation techniques must be applied. The typically large numbers of both virtual sources and loudspeakers in a WFS system result in a very high number of simultaneous delay operations, making delay processing one of the most performance-critical aspects of a WFS rendering system. In this article, we investigate delay interpolation algorithms suitable for WFS. To overcome the prohibitive computational cost of high-quality algorithms, we propose a computational structure that achieves a significant complexity reduction through a novel algorithm partitioning and efficient data reuse.
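To illustrate the time-variant delay operation the abstract refers to, the following is a minimal sketch of linear fractional-delay interpolation, one of the simplest delay interpolation techniques. All names are illustrative assumptions; the article's actual algorithms and partitioning scheme are not reproduced here.

```python
def fractional_delay_read(buffer, delay):
    """Read a sample `delay` samples back from the end of `buffer`,
    linearly interpolating between the two nearest stored samples.

    Hypothetical helper: a real WFS renderer would use higher-order
    interpolation and per-loudspeaker delay lines."""
    n = int(delay)            # integer part of the delay
    frac = delay - n          # fractional part in [0, 1)
    i = len(buffer) - 1 - n   # index of the newer neighbouring sample
    # Weighted average of the two samples around the fractional position.
    return (1.0 - frac) * buffer[i] + frac * buffer[i - 1]

# A moving source corresponds to a delay that changes over time, so each
# output sample may require a different fractional read position:
signal = [float(k) for k in range(10)]   # ramp signal makes interpolation visible
out = [fractional_delay_read(signal, d) for d in (2.0, 2.5, 3.25)]
# For a ramp ending at 9.0, a delay of d samples yields 9.0 - d.
```

Per-sample interpolated reads like this, multiplied by the number of virtual sources and loudspeakers, are what drives the computational cost the article addresses.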