In mixed reality (MR) applications, digital audio objects are rendered via an acoustically transparent playback system to blend with the listener's physical surroundings. This requires a binaural simulation process that perceptually matches the reverberation properties of the local environment, so that virtual sounds are indistinguishable from real sounds emitted around the listener. In this paper we propose an acoustic scene programming model that allows pre-authoring the behaviors and trajectories of a set of sound sources in an MR audio experience, while deferring the specification of the enclosing room's reverberation properties to rendering time.
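The core idea of deferring room acoustics to rendering time can be illustrated with a minimal sketch. The names below (`SoundSource`, `AcousticScene`, `RoomProperties`, `render`) are hypothetical and not taken from the paper; this only shows the late-binding pattern the abstract describes, assuming a trajectory is a function of time and that reverberation is summarized by a few scalar room parameters.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SoundSource:
    """A pre-authored virtual source: an audio asset plus a time-parameterized trajectory."""
    asset: str
    trajectory: Callable[[float], Vec3]  # t (seconds) -> position in listener space

@dataclass
class RoomProperties:
    """Reverberation parameters of the enclosing room, supplied only at rendering time."""
    rt60_s: float      # reverberation time (seconds)
    volume_m3: float   # room volume (cubic meters)
    absorption: float  # mean absorption coefficient

@dataclass
class AcousticScene:
    """Scene authored offline; the room's acoustics are bound when render() is called."""
    sources: List[SoundSource] = field(default_factory=list)

    def add(self, source: SoundSource) -> None:
        self.sources.append(source)

    def render(self, t: float, room: RoomProperties) -> List[dict]:
        # Binding the room at render time lets the same authored scene
        # adapt to whatever physical space the listener is actually in.
        return [
            {"asset": s.asset, "position": s.trajectory(t), "rt60_s": room.rt60_s}
            for s in self.sources
        ]

# Author the scene once, ahead of time...
scene = AcousticScene()
scene.add(SoundSource("bird.wav", trajectory=lambda t: (t, 1.5, 0.0)))

# ...then render it against the listener's actual room, characterized on site.
frame = scene.render(2.0, RoomProperties(rt60_s=0.6, volume_m3=80.0, absorption=0.3))
```

The separation mirrors the paper's premise: source behaviors and trajectories are fixed at authoring time, while the reverberation model is a free parameter resolved per deployment environment.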
The Engineering Briefs at this Convention were selected on the basis of a submitted synopsis, ensuring that they are of interest to AES members, and are not overly commercial. These briefs have been reproduced from the authors' advance manuscripts, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for their contents. Paper copies are not available, but any member can freely access these briefs. Members are encouraged to provide comments that enhance their usefulness.