Despite the importance of acoustical diffraction in our natural environment, modeling such effects is complex and computationally expensive for all but trivial environments, and it is therefore typically ignored altogether in virtual reality and gaming applications. Driven by the gaming industry, consumer computer graphics hardware, and the graphics processing unit (GPU) in particular, has advanced greatly in recent years, outperforming the computational capacity of central processing units (CPUs). Given the widespread use and availability of computer graphics hardware, GPUs have been successfully applied to other, non-graphics applications, including audio processing and acoustical diffraction modeling. Here we build upon an existing GPU-based acoustical occlusion/diffraction modeling method that can become problematic when the sound source and the listener are in separate rooms. The proposed method approximates acoustical occlusion/diffraction effects for complex, multi-room environments. It is computationally efficient, allowing it to be incorporated into real-time, dynamic, and interactive virtual environments and video games where the scene is arbitrarily complex.