It is possible to create highly immersive listening experiences with many loudspeakers that are carefully positioned and calibrated. Such systems are commonly found in research laboratories and recording studios but are unrealistic in home environments. It is necessary to find new ways to deliver immersive spatial audio to audiences at scale.
Whilst listeners might not be prepared to install high-channel-count systems, most living rooms are likely to contain a large number of devices capable of media reproduction, including personal devices (such as mobile phones and tablets) and smart speakers. As such devices become more interconnected, they can be used to facilitate increased immersion. For example, second-screen experiences can provide additional features for immersion and personalisation on top of standard broadcasts. The audio capabilities of such devices remain relatively untapped, but object-based audio offers the possibility of repurposing audio content to make optimal use of ad hoc arrays.
Recent experiments and demonstrators have shown potential in this approach to spatial audio reproduction. Breaking away from the paradigm of matched loudspeakers at the same distance from the listener can offer a surprisingly immersive and high-quality listening experience.
In this workshop, the challenges of spatial audio reproduction in the home will be discussed, and the concept of media device orchestration for immersive spatial audio will be introduced. We will present the results of perceptual evaluation experiments demonstrating the potential advantages of, and current problems with, this method of spatial audio reproduction. The technology behind device synchronisation will be reviewed. The challenges and opportunities of content production will be discussed, focussing on recent demonstrators such as The Vostok-K Incident (an immersive audio drama for orchestrated devices released by the S3A project). Finally, the potential next steps for this technology will be considered, with future research and implementation challenges outlined.