Perception-Based Room Rendering for Auditory Scenes
A new rendering algorithm is introduced that models a given room parameterized by a set of perceptual parameters. Processing cost and memory requirements are minimized, and the system is capable of reproducing a large number of sound sources and of independently processing many different listening positions. Rather than reproducing a large number of reflections individually (as in mirror-image rendering or ray tracing), sets of reflections are combined into a simple statistical representation of direction of incidence, diffuseness, absorption, etc. For each perceptual parameter a statistical representation is defined that can easily be used to reproduce impulse responses for any number of reproduction channels, from 2 to N. For a high number of reproduction channels, wave-field synthesis techniques can be used to reproduce a complete sound field, rather than a sweet-spot-based perception for a single listening position.
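The abstract does not give an implementation; the following is a minimal sketch of the general idea, assuming a hypothetical parameterization of one reflection cluster (mean incidence direction, diffuseness, decay time, level) and a simple cosine-panning loudspeaker layout. All function and parameter names are illustrative, not the authors' method.

```python
# Sketch (not the authors' implementation): render one statistical reflection
# cluster as a multichannel impulse response. Parameter names and the panning
# scheme are assumptions made for illustration only.
import numpy as np

def cluster_to_ir(num_channels, fs=48000, duration=0.3,
                  direction_deg=45.0, diffuseness=0.5,
                  decay_time=0.25, level_db=-12.0, seed=0):
    """Render a reflection cluster as an exponentially decaying noise burst,
    distributed over `num_channels` loudspeakers (2..N)."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    t = np.arange(n) / fs

    # Exponential decay models the cluster's absorption/decay parameter.
    envelope = 10 ** (level_db / 20) * np.exp(-6.91 * t / decay_time)

    # Hypothetical loudspeaker layout: channels evenly spaced on a circle.
    speaker_deg = np.linspace(0.0, 360.0, num_channels, endpoint=False)

    # Directional gains: cosine panning toward the mean incidence direction,
    # blended with equal gains according to diffuseness (0 = directional,
    # 1 = fully diffuse).
    diff = np.radians(speaker_deg - direction_deg)
    directional = np.clip(np.cos(diff), 0.0, None)
    if directional.sum() > 0:
        directional /= directional.sum()
    diffuse = np.full(num_channels, 1.0 / num_channels)
    gains = (1.0 - diffuseness) * directional + diffuseness * diffuse

    # Decorrelated noise per channel approximates the combined reflection set.
    ir = np.empty((num_channels, n))
    for ch in range(num_channels):
        ir[ch] = gains[ch] * envelope * rng.standard_normal(n)
    return ir

# Example: impulse responses for a 5-channel reproduction layout.
ir = cluster_to_ir(num_channels=5)
print(ir.shape)  # (5, 14400)
```

A full renderer would sum several such clusters (plus the direct sound) per source and listening position, but the per-cluster statistical description is what keeps the cost independent of the number of individual reflections.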