
AES Section Meeting Reports

New York - May 22, 2012

Meeting Topic: Dolby Atmos, Dolby's new cinema sound format

Moderator Name:

Speaker Name:

Other business or activities at the meeting:

Meeting Location:

Summary

Ken introduced Charles to a modest but engaged crowd with diverse backgrounds, from theatre designers to audio and re-recording mixers. Charles opened his talk with a brief discussion of Dolby's motivations in developing their new cinema sound solution, "Atmos", which they recently launched at CinemaCon. Those motivations were Dolby's continuing interest in advancing the state of the art in cinema sound, and a feeling that the cinema industry as a whole was ready for the next step. Furthermore, they wished to add value for the three key sections of the community:
-content providers (improving the sonic palette and allowing a "mix once" workflow for Atmos and all legacy formats)
-distributors (maintaining the existing audio distribution model)
-exhibitors (allowing opportunities for premium presentation and market differentiation).

The presentation then went on to detail Dolby's discussions with stakeholders about the requirements of such a next step, the four conclusions arising from these discussions, and the ways in which the final realization of the format meets those requirements. The first conclusion was to provide consistent, impeccable sound quality. Atmos addresses this by using a lossless codec at a 96 kHz sampling rate with 24-bit depth. The next conclusion was that sound images off the screen should be supported. To this end, Atmos provides full-spectrum surround channels and allows the mixer to specify point-source images off screen (more on this later!). The third conclusion Charles described was to aim for universal compatibility. To achieve this goal, Atmos allows for simplified creation and distribution. The format is both backwards compatible and future-proof, and leaves room for exhibitors to innovate and differentiate themselves. The workflow for producing Atmos mixes allows a true "mix once, play anywhere" approach with legacy support. The final requirement arising from the discussions was to give the audio a sense of elevation, which Atmos does by specifying stereo sets of ceiling speakers.

This concluded the introduction to the system, and Charles proceeded to the significant details, starting with the recommended loudspeaker layout for Atmos. Atmos specifies speakers in a 7.1 configuration in the horizontal plane, similar to the existing Dolby Surround 7.1 specification, but allows bass-managed subwoofers for the surround channels. This extends the frequency range of the surround channels across the full range of human hearing. To this, Atmos adds a pair of ceiling speakers or speaker arrays to provide elevation, and allows surround speaker placement in the front third of the room, something that earlier Dolby Surround formats did not allow. The loudspeaker layout could be thought of as similar to a 9.1 system. However, as will become very apparent, it is critical for Atmos that every speaker now has its own discrete amplifier; in previous Dolby Surround systems, surround speakers on the same surround channel could share amplifiers.
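As a rough illustration of that layout, the sketch below describes the speaker complement as a simple data structure. The channel names and the nominal (x, y, z) positions are assumptions made here for illustration only, not values specified by Dolby; a real installation would use the measured position of each individual loudspeaker.

```python
# Hypothetical Atmos-style speaker layout: a 7.1 bed in the horizontal plane
# plus a pair of ceiling (top) speaker arrays. The normalized room coordinates
# (x: left-right, y: front-back, z: height) are illustrative assumptions only.
SPEAKER_LAYOUT = {
    "L":   (-1.0,  1.0, 0.0),  # screen left
    "C":   ( 0.0,  1.0, 0.0),  # screen centre
    "R":   ( 1.0,  1.0, 0.0),  # screen right
    "Ls":  (-1.0,  0.3, 0.0),  # left side surround (array)
    "Rs":  ( 1.0,  0.3, 0.0),  # right side surround (array)
    "Lbs": (-1.0, -1.0, 0.0),  # left back surround (array)
    "Rbs": ( 1.0, -1.0, 0.0),  # right back surround (array)
    "LFE": ( 0.0,  1.0, 0.0),  # low-frequency effects (bass-managed subs)
    "Lts": (-0.5,  0.0, 1.0),  # left top (ceiling) array
    "Rts": ( 0.5,  0.0, 1.0),  # right top (ceiling) array
}
```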

Charles then went on to describe the Atmos signal model, one of the more radical departures of Atmos from our traditional ideas of cinema sound. Atmos provides for two distinct types of audio signal: channels and objects. The channels behave like legacy audio channels and are sent either to discrete loudspeakers (in the case of L, C and R) or to diffuse arrays, in the case of the surround channels (Ls, Rs, Lbs, Rbs, Lceiling, Rceiling). The main purpose of the off-screen channels here is to provide complex audio textures and ambiences similar to traditional surround techniques, and to keep authoring efficient. The audio objects, by contrast, are used to define point sources and are custom rendered for each playback system: each Ls speaker, say, receives the same audio from the Ls channel, plus audio for all object channels rendered specifically for that loudspeaker. This custom rendering is based on object position data carried in the audio mix and on the loudspeaker positions (for each individual loudspeaker) entered into the processor when the system is commissioned. This model lets the engineer choose whether to "bake" the position in at the mix (providing diffuse surround sound) or leave it to be rendered at playback (giving an accurate off-screen point source).
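To make the channel/object distinction concrete, here is a minimal rendering sketch. Dolby's actual object renderer is proprietary and was not detailed in the talk; the inverse-distance gain law, the function name, and the normalization below are assumptions chosen only to show how per-speaker gains could be derived from an object's position and the loudspeaker positions entered at commissioning.

```python
import math

# Hypothetical loudspeaker positions entered at commissioning time
# (normalized room coordinates, as in the layout sketch above).
SPEAKERS = {
    "Ls":  (-1.0,  0.3, 0.0),
    "Rs":  ( 1.0,  0.3, 0.0),
    "Lbs": (-1.0, -1.0, 0.0),
    "Rbs": ( 1.0, -1.0, 0.0),
    "Lts": (-0.5,  0.0, 1.0),
    "Rts": ( 0.5,  0.0, 1.0),
}

def render_object_gains(obj_pos, speakers=SPEAKERS, power=2.0):
    """Toy per-speaker gain calculation for one audio object.

    This is NOT Dolby's algorithm: it simply weights each speaker by
    inverse distance to the object position and normalizes for constant
    power, purely to illustrate per-loudspeaker rendering.
    """
    weights = {}
    for name, pos in speakers.items():
        dist = math.dist(obj_pos, pos)
        weights[name] = 1.0 / (dist ** power + 1e-6)
    norm = math.sqrt(sum(w * w for w in weights.values()))
    return {name: w / norm for name, w in weights.items()}

# Example: an object placed just left of the room centre, at ear height.
print(render_object_gains((-0.4, 0.0, 0.0)))
```

A channel signal, by contrast, would simply be fed to every speaker in its array, which is why the mixer's choice between channel and object amounts to baking the spatialization in or deferring it to playback.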

The next topic was the lossless codec used to transmit and store the audio data. The most interesting point about the codec is that it offers a scalable sample rate: 48 kHz and 96 kHz data can be transmitted efficiently in parallel, such that one stream of data carries the first 48 kHz of data and a second stream provides the rest, up to 96 kHz. Furthermore, the packaging is efficient, allowing 128 signals at a 48 kHz sampling rate and so leaving plenty of room for the audio objects. The encoded program is stored and distributed as a DCP (digital cinema package), using the auxiliary audio section of the DCP to carry the Atmos data. This allows the legacy audio to keep its place, providing backwards compatibility for current theaters.
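The scalable-sample-rate idea can be illustrated with a toy split into a base layer and an extension layer. The actual layering and coding in the Atmos codec were not described in detail, so the even/odd-sample split below is only a conceptual stand-in: decoding the base layer alone yields a 48 kHz signal, and combining both layers reconstructs the original 96 kHz samples losslessly.

```python
import numpy as np

def split_layers(x_96k):
    """Toy scalable split: even samples form a 48 kHz base layer, odd
    samples form the extension layer (no anti-alias filtering, purely
    illustrative of the base-plus-extension idea)."""
    base = x_96k[0::2]       # playable on its own as 48 kHz audio
    extension = x_96k[1::2]  # extra data needed to reach 96 kHz
    return base, extension

def merge_layers(base, extension):
    """Interleave the two layers to recover the 96 kHz signal exactly."""
    x = np.empty(base.size + extension.size, dtype=base.dtype)
    x[0::2] = base
    x[1::2] = extension
    return x

x = np.random.randn(96000)                         # one second at 96 kHz
base, ext = split_layers(x)
assert np.array_equal(merge_layers(base, ext), x)  # lossless round trip
```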

The next topic of discussion was the extent of the object metadata. Charles gave a few examples of the metadata (beyond position) that the mixer can use to describe each audio object. These included size, bleed, de-correlation to increase source width, whether the position is referenced to the screen or to the room, and even a flag specifying that the object's position is approximate and that it should be directed to the nearest single loudspeaker and no others.
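A rough sketch of how such per-object metadata might be represented is shown below. The field names, types, and defaults are assumptions made for illustration; the talk only indicated the kinds of properties available, not an actual schema.

```python
from dataclasses import dataclass

@dataclass
class ObjectMetadata:
    """Hypothetical per-object metadata record (illustrative field names only)."""
    position: tuple                # (x, y, z) in normalized coordinates
    size: float = 0.0              # apparent source size
    bleed: float = 0.0             # spill into neighbouring speakers
    decorrelation: float = 0.0     # widen the source by decorrelating outputs
    room_referenced: bool = False  # position relative to room rather than screen
    snap_to_speaker: bool = False  # approximate position: nearest speaker only

# Example: a fly-over effect placed overhead, slightly forward of centre.
fly_over = ObjectMetadata(position=(0.0, 0.5, 1.0), size=0.2, room_referenced=True)
```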

Another very significant advance in the new standard is the room correction now included in the cinema processors. Traditionally, the cinema would have to line up its replay system by hand to published curves (determined by room size) when commissioning each room. This would be done for each discrete channel and averaged over position. Charles showed us some existing plots of rooms, which were frequently not ideal. Now that audio objects are included in the specification, it is important that each discrete speaker is adequately calibrated and lined up to meet the specifications. To aid in this potentially mammoth task, Dolby has included an automatic alignment system in the processors. The processor also allows for time-domain adjustment between speakers where necessary, and the target curves are programmable, to allow for future changes in the specification. Charles showed us frequency-domain plots of the results of the automatic line-up, and they look exceptionally accurate.
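A very simplified version of such an automatic line-up is sketched below: compare a measured per-speaker magnitude response against a programmable target curve, compute band-by-band correction gains (with a limit on boost), and add a sample delay for time alignment. The band centres, gain limits, and overall structure are assumptions for illustration; Dolby's actual calibration system is far more sophisticated.

```python
import numpy as np

def correction_gains_db(measured_db, target_db, max_boost_db=6.0, max_cut_db=12.0):
    """Per-band correction (dB) that moves the measured response toward the
    programmable target curve, limited to avoid excessive boost or cut."""
    correction = np.asarray(target_db) - np.asarray(measured_db)
    return np.clip(correction, -max_cut_db, max_boost_db)

def alignment_delay_samples(distance_m, reference_m, fs=48000, c=343.0):
    """Extra delay (in samples) for a speaker closer than the reference
    distance, so arrivals line up in time at the listening position."""
    return int(round((reference_m - distance_m) / c * fs))

# Illustrative octave-band example (all dB values are made up):
bands_hz    = [63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
measured_db = [-2.0, 0.5, 1.0, 0.0, -1.5, -3.0, -4.0, -6.0, -9.0]
target_db   = [ 0.0, 0.0, 0.0, 0.0,  0.0, -1.0, -2.0, -4.0, -6.0]
print(dict(zip(bands_hz, correction_gains_db(measured_db, target_db))))
print(alignment_delay_samples(distance_m=9.5, reference_m=12.0))
```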

After a brief discussion on the suitability of the published curves, there then followed lively individual discussion.

The NY committee would like to thank Charles for his time and for providing a fascinating insight into this development, Chris Hoffman at the New School University, and all attendees. Further information about Dolby Atmos can be found at www.dolby.com/atmos.

Written By:
