In virtual auditory environments, sound sources are typically created in two stages: a dry monophonic signal is synthesized, and then spatial attributes (such as source position, size, and directivity) are applied by dedicated signal-processing algorithms. In this paper we present an architecture that combines additive sound synthesis and 3D positional audio at the same level of sound generation. Our algorithm is based on inverse fast Fourier transform (IFFT) synthesis and amplitude-based sound positioning. It allows sinusoids and colored noise to be synthesized and spatialized efficiently, simulating both point-like and extended sound sources. The audio rendering can be adapted to any reproduction system (headphones, stereo, 5.1, etc.). The possibilities offered by the algorithm are illustrated with environmental sounds.
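To make the combined synthesis/spatialization idea concrete, the following is a minimal sketch, not the paper's exact method: a single sinusoid is synthesized by writing its spectral peak directly into an FFT buffer (IFFT synthesis), and amplitude-based positioning is applied in the spectral domain by scaling that spectrum with per-channel panning gains before one IFFT per output channel. All parameter names and the constant-power stereo pan law are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the paper's implementation).
N = 1024      # FFT frame length
k = 40        # spectral bin of the sinusoid (frequency = k * fs / N)
amp = 0.5     # sinusoid amplitude

# Spectrum of a real cosine at bin k: conjugate-symmetric peaks at k and N-k.
spec = np.zeros(N, dtype=complex)
spec[k] = amp * N / 2
spec[N - k] = amp * N / 2

# Constant-power stereo panning gains for a pan angle in [0, pi/2].
theta = np.pi / 6
g_left, g_right = np.cos(theta), np.sin(theta)

# Spatialization happens at the synthesis stage: scale the spectrum per
# channel, then run one inverse FFT per output channel.
left = np.fft.ifft(g_left * spec).real
right = np.fft.ifft(g_right * spec).real

# Sanity check: each channel equals a time-domain cosine scaled by its gain.
n = np.arange(N)
ref = amp * np.cos(2 * np.pi * k * n / N)
assert np.allclose(left, g_left * ref)
assert np.allclose(right, g_right * ref)
```

Extending this to many partials (or to shaped noise bands) amortizes the cost: all components of a source are accumulated into one spectral buffer per channel, so the number of IFFTs depends on the reproduction format, not on the number of sinusoids.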