Meeting Topic: Joint meeting with SMPTE on Television Broadcasting's new ATSC 3.0 Standard
Moderator Name: Jim DeFillipis
Speaker Name: Richard Chernock, Ph.D., Chief Science Officer of Triveni Digital
Other business or activities at the meeting: Joint Meeting with SMPTE Hollywood Section
Meeting Location: Linwood Dunn Theater, Academy of Motion Picture Arts and Sciences, Hollywood, California
On Thursday, September 28, 2017, the Audio Engineering Society Los Angeles (AESLA) held a joint meeting with the Society of Motion Picture and Television Engineers Hollywood Section (SMPTE-Hollywood) and the Los Angeles Society of Television Engineers (STE) at the Linwood Dunn Theater of the Academy of Motion Picture Arts and Sciences (AMPAS) in Hollywood. After a lively and well-attended social mixer, the stage was taken by Dr. Richard Chernock, Chief Science Officer of Triveni Digital and Chair of the Advanced Television Systems Committee's Technology Group 3, which is responsible for the ATSC 3.0 standard. The presentation was sponsored by the Distinguished Lecturer Program of the Institute of Electrical and Electronics Engineers' Broadcast Technology Society (IEEE-BTS), as part of its mission of disseminating knowledge about broadcast technologies to a broader audience.
ATSC 3.0 is a comprehensive set of standards, most of which have already been approved, but parts of which are still in development. It covers topics such as Digital Video and Audio Compression, Display Monitors, Cameras, Satellite Transmission, Audio Loudness, Channel Allocation, 8VSB modulation, 3D, Digital Radio, Antennas, Image Artifacts, Directionality Pattern design for Antennas, and Multimedia Broadcast Services including Distributed Transmission, Network Diversity, and Temporal Dependent Rate Distortion Optimization in Motion Compensated Video Coding. Dr. Chernock noted that this standard was developed through the joint efforts of the USA, UK, Canada, Germany, Singapore, Argentina, China, Spain, Korea, and Malaysia, and he touched on each of these topics and described the progress made so far.
Dr. Chernock described the origins of the ATSC (1.0) in 1983 by multiple organizations, and its efforts to advance terrestrial digital television broadcasting. In 1995, it defined a digital television broadcasting standard that included high-definition video, digital 5.1 audio, electronic program guides, closed captioning services, and extensibility. Since that time, however, multimedia has become ubiquitous, and is played on cell phones, desktop computers, laptops, watches, and tablet devices, as well as traditional televisions, which themselves have morphed from three-dimensional boxes to nearly two-dimensional sheets of glass. Broadcast technologies have also become more bi-directional, with information from viewers flowing back to broadcasters via the internet. This has enabled advertising targeting and new business models based upon precise, one-to-one marketing. The development of the new ATSC 3.0 standard has been driven by multiple demands: for higher-resolution images and sound, enabled by greater compression efficiencies; an increasing scarcity of spectrum worldwide; alternate delivery path options; personalization and interactivity; and the demands of advertisers.
For video aficionados, ATSC 3.0 incorporates a wider color gamut, higher-definition pictures, greater contrast, and higher frame rates. All of these have become possible because the compression algorithms used in ATSC 3.0 are now approaching their theoretical efficiency limits.
For the members of the Audio Engineering Society, the key change is the enabling of immersive and personalized audio. While 5.1 was an important advance over the prior stereo audio standard, ATSC 3.0's 7.1+4 Immersive Audio allows for object-oriented audio on top of the greater number of channels, allowing for sounds not just from the front, back, and sides, but from above as well. ATSC 3.0 also allows for broadcasting in multiple languages, selectable by the user, and for the selection and synchronization of streams simulcast via the internet when the capability is present on the playback device. Many of these capabilities have been enabled by basing the transmission standard on the Internet Protocol; in other words, all data is sent and arrives in standard internet data packets. The inclusion of audio metadata in ATSC 3.0 allows the audio stream playback to be optimized in real time to the specific equipment, whether a home theater or a mobile phone with ear buds.
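The personalization described above can be pictured as a receiver choosing which metadata-tagged streams to render for a given listener. The following Python sketch is purely illustrative: the field names, stream kinds, and selection logic are assumptions for the sake of the example, not part of the ATSC 3.0 specification or any real decoder API.

```python
# Illustrative sketch of metadata-driven audio personalization.
# All names and fields here are hypothetical, not from the ATSC 3.0 spec.

from dataclasses import dataclass

@dataclass
class AudioStream:
    kind: str        # "bed" (channel bed) or "object" (dialogue, effects)
    language: str    # e.g. "en", "es"; empty string for language-neutral streams
    channels: int    # channel count carried by this stream

def select_streams(streams, preferred_language):
    """Pick the streams a receiver might render for one viewer.

    - The channel bed is always kept (a real decoder would downmix it
      to the device's speaker layout, which is not shown here).
    - Dialogue objects are kept only if they match the viewer's language.
    - Language-neutral objects (music, effects) are always kept.
    """
    selected = []
    for s in streams:
        if s.kind == "bed" or s.language in ("", preferred_language):
            selected.append(s)
    return selected

# A hypothetical program: immersive bed plus dialogue and effects objects.
program = [
    AudioStream("bed", "", 12),      # 7.1+4 immersive channel bed
    AudioStream("object", "en", 1),  # English dialogue object
    AudioStream("object", "es", 1),  # Spanish dialogue object
    AudioStream("object", "", 2),    # effects object, language-neutral
]

# A viewer who prefers Spanish gets the bed, Spanish dialogue, and effects.
chosen = select_streams(program, preferred_language="es")
print([(s.kind, s.language) for s in chosen])
```

The point of the sketch is that personalization happens at the receiver: the broadcaster sends every object once, and each device assembles its own mix from the metadata.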
The AES-Los Angeles would like to thank Dr. Richard Chernock for his detailed and informative presentation, and SMPTE-Hollywood and STE for jointly sponsoring this presentation.
Written By: John Svetlik