Object-based sound reproduction allows sound engineers to interact with sound objects not only during production but also in the reproduction venue. At the same time, object-based systems are quite complex: multicore audio processors are used to render sound scenes consisting of hundreds of audio objects for reproduction over a large number of loudspeaker channels. This calls for applications that are optimally adapted to the user and for working tasks to be parallelized. This paper outlines a software architecture that helps to incorporate the multitude of audio processing components of an object-based spatial audio environment into a unified system. The architecture allows multiple sound engineers to collaboratively access, monitor, control, and change the parameters of these system components using wireless mobile devices.