A solution for producing virtual sound environments based on the physical characteristics of a modeled complex volume is described. The goal is to reproduce, in real time, the sound field as a function of the listener's position and to allow some interactivity (for instance, changing material characteristics). First, an adaptive beam tracing algorithm computes a geometrical solution between the sources and several positions inside the volume. This algorithm is not limited to polygonal faces and handles diffraction. The precomputed paths, once ordered and selected, are then auralized, and an adaptive artificial reverberation is applied. New techniques for fast and accurate rendering are detailed. The proposed approach provides accurate audio rendering on headphones or within advanced multi-user immersive environments.
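The paper does not give implementation details here, but the "ordered and selected" step for precomputed paths can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes each geometric path is summarized by its length and reflection count, scores it with a simple 1/r spreading loss and a per-bounce absorption factor (both hypothetical parameters), keeps the strongest contributors, and orders them by arrival time for auralization.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at ~20 C

def order_and_select_paths(paths, max_paths, absorption=0.1):
    """Score precomputed geometric paths, keep the strongest
    `max_paths` of them, and return them ordered by arrival time.

    `paths` is a list of (length_m, n_reflections) tuples; the
    scoring model (1/r spreading times per-bounce energy loss)
    is a deliberately simplified stand-in for a real one.
    """
    scored = []
    for length, n_reflections in paths:
        delay = length / SPEED_OF_SOUND  # arrival time in seconds
        # 1/r distance attenuation combined with reflection losses
        gain = (1.0 / max(length, 1e-6)) * (1.0 - absorption) ** n_reflections
        scored.append((delay, gain))
    # select the strongest contributors, then sort by arrival time
    strongest = sorted(scored, key=lambda p: p[1], reverse=True)[:max_paths]
    return sorted(strongest)

# Example: a direct path plus three reflected paths of increasing order
paths = [(3.0, 0), (7.5, 1), (12.0, 2), (20.0, 3)]
selected = order_and_select_paths(paths, max_paths=3)
```

In an interactive renderer, a selection like this bounds the per-frame cost of convolving early reflections, while the discarded late, weak paths are covered by the adaptive artificial reverberation mentioned in the abstract.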