Perception-Based Room Rendering for Auditory Scenes
A new rendering algorithm is introduced that models a given room parameterized by a set of perceptual parameters. Processing cost and memory requirements are minimized, and the system can reproduce a large number of sound sources and independently process many different listening positions. Rather than reproducing a large number of reflections individually (as in mirror-image rendering or ray tracing), sets of reflections are combined into a simple statistical representation of direction of incidence, diffuseness, absorption, and so on. For every perceptual parameter, a statistical representation is defined that can readily be used to reproduce impulse responses for any number of reproduction channels, from two upward. For a high number of reproduction channels, wave-field synthesis techniques can reproduce a complete sound field, rather than a sweet-spot-based perception at a single listening position.
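To make the core idea concrete, the following is a minimal, hypothetical sketch of what "combining sets of reflections into a statistical representation" might look like: reflections are grouped into time segments, each segment is summarized by its total energy, mean direction of incidence, and a diffuseness measure, and a two-channel impulse response is then synthesized from those statistics alone. All function names, parameters, and the panning scheme are illustrative assumptions, not the paper's actual algorithm.

```python
import math
import random

def summarize_reflections(reflections, segment_ms=20.0):
    """Group (time_ms, amplitude, azimuth_rad) reflections into segments.

    Returns one dict per non-empty segment with:
      energy      -- sum of squared amplitudes in the segment
      direction   -- energy-weighted mean azimuth (radians)
      diffuseness -- 1 - |mean unit vector| (0 = single direction, 1 = fully diffuse)
    """
    segments = {}
    for t, a, az in reflections:
        segments.setdefault(int(t // segment_ms), []).append((a, az))
    stats = []
    for idx in sorted(segments):
        group = segments[idx]
        energy = sum(a * a for a, _ in group)
        # Energy-weighted mean of the incidence unit vectors.
        x = sum(a * a * math.cos(az) for a, az in group) / energy
        y = sum(a * a * math.sin(az) for a, az in group) / energy
        stats.append({
            "t0_ms": idx * segment_ms,
            "energy": energy,
            "direction": math.atan2(y, x),
            "diffuseness": 1.0 - math.hypot(x, y),
        })
    return stats

def render_stereo_ir(stats, segment_ms=20.0, sr=48000, seed=0):
    """Fill each segment with noise scaled to its energy and panned by its
    mean direction -- a crude two-channel stand-in for reproducing the
    segment statistics over an arbitrary number of channels."""
    rng = random.Random(seed)
    n_seg = int(sr * segment_ms / 1000.0)
    total = int((stats[-1]["t0_ms"] + segment_ms) / 1000.0 * sr)
    left = [0.0] * total
    right = [0.0] * total
    for s in stats:
        start = int(s["t0_ms"] / 1000.0 * sr)
        gain = math.sqrt(s["energy"] / n_seg)
        # Simple constant-power panning from the mean azimuth.
        pan = (math.sin(s["direction"]) + 1.0) / 2.0
        for i in range(start, min(start + n_seg, total)):
            v = rng.gauss(0.0, 1.0) * gain
            left[i] = v * math.sqrt(1.0 - pan)
            right[i] = v * math.sqrt(pan)
    return left, right
```

Because only per-segment statistics are stored, the cost is independent of the number of individual reflections, which is the efficiency argument made in the abstract; extending the renderer to more channels (or to wave-field synthesis driving functions) would replace only the panning stage.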