This tutorial offers a technical and practical view of this fast-evolving topic, based on field experience with multiple large-scale projects over the past year. Challenges and opportunities have appeared in specific technical areas: object definition, mixing workflow, control strategies, loudspeaker-system design, and compatibility with broadcast and post-production scenarios. We show how these challenges can be addressed at every step of a project, laying the groundwork for the successful development of object-based mixing in live sound.
Object-based audio can be considered a natural extension of the track on a mixing console. Combining this approach with multi-array loudspeaker configurations can provide substantial benefits in localization, intelligibility, and immersion, while retaining complete flexibility in the rendering setup.
Parameters such as a sound object's position in space, its proximity, and its depth can now be controlled for most of the audience. Controlling object parameters during the show becomes part of the creative process, and different scenarios are emerging, whether driven by console snapshot engines, DAW automation, or live tracking systems. Lessons learned from large-scale projects will be shared.
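The snapshot- and automation-driven control scenarios above can be sketched as a simple parameter crossfade. This is a minimal illustration, not any console's actual engine: the object fields (azimuth, distance) and the linear interpolation are assumptions chosen for clarity; real systems expose richer parameter sets and interpolation laws.

```python
from dataclasses import dataclass

@dataclass
class ObjectSnapshot:
    """Hypothetical snapshot of one sound object's spatial parameters."""
    azimuth_deg: float   # horizontal angle, 0 = front of house
    distance_m: float    # perceived distance from the listener

def interpolate(a: ObjectSnapshot, b: ObjectSnapshot, t: float) -> ObjectSnapshot:
    """Linearly crossfade object parameters between two snapshots (0 <= t <= 1),
    as a console snapshot engine or a DAW automation lane might do over a cue."""
    lerp = lambda x, y: x + t * (y - x)
    return ObjectSnapshot(
        azimuth_deg=lerp(a.azimuth_deg, b.azimuth_deg),
        distance_m=lerp(a.distance_m, b.distance_m),
    )

# Example cue: move a vocal object from front-center to 30 degrees left,
# while pushing it back from 2 m to 4 m.
start = ObjectSnapshot(azimuth_deg=0.0, distance_m=2.0)
end = ObjectSnapshot(azimuth_deg=-30.0, distance_m=4.0)
mid = interpolate(start, end, 0.5)
```

A live tracking system would replace the fixed `end` snapshot with a continuously updated position feed, but the renderer-facing parameter stream has the same shape.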
Object-based mixing also raises questions about best practices for effects such as reverberation, whether implemented as input processing or as engines incorporated into the audio renderer.
Finally, object-based mixing can shine in complex configurations involving multiple audio renderers, such as simultaneous immersive live and broadcast mixes. Recent experiments will be presented and discussed, highlighting the benefits of current efforts to standardize object metadata formats such as the EBU Audio Definition Model.
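To illustrate the kind of metadata the Audio Definition Model standardizes, the sketch below serializes one object position block in ADM-style XML. The element and attribute names (`audioBlockFormat`, `position`, `coordinate`) follow the ADM specification (ITU-R BS.2076), but the ID value is a placeholder invented for this example, not taken from any production file.

```python
import xml.etree.ElementTree as ET

# Minimal ADM-style audioBlockFormat describing an object at 30 degrees
# azimuth and 5 m distance. The audioBlockFormatID is a made-up placeholder.
block = ET.Element("audioBlockFormat", audioBlockFormatID="AB_00031001_00000001")
for coord, value in (("azimuth", "30.0"), ("elevation", "0.0"), ("distance", "5.0")):
    pos = ET.SubElement(block, "position", coordinate=coord)
    pos.text = value

xml_str = ET.tostring(block, encoding="unicode")
```

Because the metadata is plain XML rather than a proprietary console format, the same object description can in principle feed both a live immersive renderer and a broadcast renderer, which is the interoperability benefit the standardization effort targets.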