Thursday, September 29, 9:00 am — 10:30 am (Rm 408A)
Abstract:
In an industry deluged by acronyms, Immersive Audio appears to have leapfrogged the trend (although I.A. or 3D Audio may suffice). As with Surround Sound, Quad Sound, and their various 3.1, 5.1, 7.1, etc., iterations, much of the noise made by these new innovations is focused on hype rather than on the real-world needs and actual desires of listeners and viewers. That said, I.A. systems for producing, distributing, and receiving this new sound experience do exist, they work, and they are proliferating.
This panel discussion will feature four experts in radio and TV broadcast technology, systems development/integration, and studio design. Panelists include Grammy Award-winning engineers Robert Margouleff and Matt Marrin and award-winning studio designer Chris Pelonis. Moderated by WSDG Founding Partner John Storyk, the discussion will explore studio and gear design issues, both acoustic and technological. Areas to be covered include: What needs to be done to equip, upgrade, and future-proof existing studios for the production, broadcast, and streaming of Immersive Audio? What creative and/or technical issues differentiate traditional speaker performance from headphone/earbud reception for I.A.? What loudness issues need to be addressed, e.g., noise and quietness, internal room responsiveness, speech vs. music, reflection and absorption?
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Thursday, September 29, 10:45 am — 12:15 pm (Rm 408A)
Abstract:
Listener fatigue is directly related to the epidemic of hearing loss among both the producing and consuming populations. It is essential for producers of audio content to understand what listener fatigue is and what causes it. The panel will discuss the physiological and psychological aspects of the phenomenon, and each panelist will offer their own perspective on what can be done to reduce it.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Thursday, September 29, 2:15 pm — 3:45 pm (Rm 408A)
Abstract:
There is no doubt that sound is essential to every storyline. It enhances the visual experience and adds depth, dimension, and emotion; it engages the viewer and drives the storyline. The creative design and balancing of multiple audio elements produce the key "sonic signature" that brings unique life to the content. In the spirit of SMPTE’s centennial year, this panel will look back at key developments in audio and sound technologies for both broadcast and cinema and at how these have affected production for live events, movie shoots, and post-production sound. The panel will discuss important milestones such as digital recording, editing, and distribution and immersive audio and, perhaps, if we’re lucky, share a few personal experiences and memories from along this evolution.
Coproduced with Society of Motion Picture and Television Engineers
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Thursday, September 29, 4:00 pm — 5:30 pm (Rm 408A)
Abstract:
Immersive audio in the home is now well beyond 5.1-channel surround. The most advanced systems have up to 22.2 channels and place speakers at different elevations, including the ceiling. Beyond the number of speakers, object-oriented audio now allows people to customize the way they consume content like never before. It lets them increase the volume of the voice track or decrease the volume of the background sounds to better understand voices. It lets them silence voices, like the announcers at a ballgame, if they simply want to experience the background sound in their program. And it lets them customize their experience in many other ways. This panel will cover the latest developments in bringing more advanced and more flexible audio into the home.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Friday, September 30, 9:00 am — 10:30 am (Rm 408A)
Abstract:
4K and 8K UHD broadcasting and streaming are edging beyond standards creation and quickly becoming ubiquitous, with a more than tenfold increase in consumer products over the past year. NHK and U.S. broadcasters are on the air with ATSC 3.0 and Super Hi-Vision experimental transmitters, while cable and satellite providers are already delivering movies and sports, from Super Bowl 50 and the Masters Tournament to the Summer Games. Immersive 3D and object-oriented audio add a striking polish to wider visual resolution, gamut, and dynamic range. This session has been very popular in past years, and we have another group of impressive speakers from the myriad facets of audio production and delivery.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Friday, September 30, 12:15 pm — 1:15 pm (Rm 408A)
Abstract:
Technical Committee Meeting on Broadcast and Online Delivery
Friday, September 30, 1:30 pm — 3:15 pm (Rm 409B)
Chair:
Amandine Pras, Paris Conservatoire (CNSMDP) - Paris, France; Stetson University - DeLand, FL, USA
EB2-1 A Broadcast Film Leader with Audio Channel, Frequency, and Synchronism Test Properties—Luiz Fernando Kruszielski, Globo TV Network - Rio de Janeiro, Brazil; Rodrigo Meirelles, Globo TV Network - Rio de Janeiro, Brazil
Universal film leaders, commonly known as “countdowns,” have long been an important tool for syncing audio and video. In broadcast production the material goes through several stages where audio and video are edited and processed, time is a precious resource, and it is important to minimize possible errors in the production chain. We propose a film leader format that, in a single 10-second clip, makes it possible to perform a preliminary check of surround and stereo channel identification, relative channel level and frequency response, and synchronism. The proposed film leader has been tested and integrated at a Brazilian television network with very good results.
Engineering Brief 286 (Download now)
EB2-2 Live vs. Edited Studio Recordings: What Do We Prefer?—Amandine Pras, Paris Conservatoire (CNSMDP) - Paris, France; Stetson University - DeLand, FL, USA
This pilot study examines a common belief in written classical music that a live recording conveys a more expressive musical performance than a technically flawless studio production. Two tonmeister students of the Paris Conservatoire recorded a six-dance baroque suite and a four-movement romantic sonata in concert and in studio sessions, with the same microphone techniques and in the same venue for both conditions. Twenty listeners completed an online survey to rate three versions of the dances and movements, i.e., the concert performance, the first studio take, and the edited version. Results show that listeners preferred the edited versions (44%) more often than the first studio takes (29%) and the concert performances (27%).
Engineering Brief 287 (Download now)
EB2-3 Rondo360: Dysonics’ Spatial Audio Post-Production Toolkit for 360 Media—Robert Dalton, Dysonics - San Francisco, CA, USA; Jimmy Tobin, Dysonics - San Francisco, CA, USA; CCRMA - Stanford, CA, USA; David Grunzweig, Dysonics - San Francisco, CA, USA
Rondo360 is Dysonics’ toolkit for spatial audio post-production, supporting multiple workflows including multichannel, Ambisonics, and Dysonics’ own native 360 Motion-Tracked Binaural (MTB) format. Rondo360 works with all input formats—live or prerecorded—from traditional or sound field microphones, and exports to a wide array of formats depending on the desired content distribution. Rondo360 integrates seamlessly with all DAWs by adding a final layer onto the creator’s existing workflow, and it comes bundled with a suite of custom mastering tools (Mixer, Compressor, Limiter, and Reverb) that work on multichannel sound field content. With support for RondoMotion, Dysonics' wireless head-tracking device, creators can monitor their 360 mixes in real time. Rondo360 also provides intuitive audio/video sync and export functionality along with live broadcasting support.
Engineering Brief 288 (Download now)
EB2-4 Withdrawn—N/A
EB2-5 Mixing Hip-Hop with Distortion—Paul "Willie Green" Womack, Willie Green Music - Brooklyn, NY, USA
The grit and grime of Hip-Hop doesn't have to be metaphorical. With the vast array of saturation tools available, distortion is no longer just something to remove from recordings, and the huge, aggressive sounds of Hip-Hop stand to benefit in particular. From subtly warming drums and keyboards to mangling vocals and samples, this brief will demonstrate techniques for creatively distorting urban music. Exploring tape emulation, parallel vocal distortion, drum crushing, and more, I will investigate how a bit of dirt can drastically affect a mix.
Engineering Brief 290 (Download now)
EB2-6 Smart Audio Is the Way Forward for Live Broadcast Production—Peter Poers, Junger Audio GmbH - Berlin, Germany
Today’s broadcast facilities face ever-increasing demands on their resources as they strive to keep up with consumers who expect more content on more devices, where and when they want it. To attract and retain viewers, consistent, stable, and coherent audio is a vital requirement. One aspect that deserves particular attention is speech intelligibility, which is most critical, and most difficult, in a live broadcast situation. The Smart Audio concept is to utilize real-time processing algorithms that are both intelligent and adaptive. Devices need to be fully interoperable with others in the broadcast environment and to integrate seamlessly with playout automation systems as well as logging and monitoring processes. This Engineering Brief will present some dedicated, proven algorithms and practical use cases for Smart Audio.
Engineering Brief 291 (Download now)
EB2-7 Towards Improving Overview and Metering through Visualization and Dynamic Query Filters for User Interfaces Implementing the Stage Metaphor for Music Mixing—Steven Gelineck, Aalborg University Copenhagen - Copenhagen, Denmark; Anders Kirk Uhrenholt, Copenhagen University - Copenhagen, Denmark
This paper deals with challenges involved in implementing the stage metaphor control scheme for mixing music. Recent studies suggest that the stage metaphor outperforms the traditional channel-strip metaphor in several ways. However, implementations of the stage metaphor face issues including clutter and a lack of overview and of level and EQ monitoring. Drawing upon suggestions in recent studies, the paper describes a stage metaphor prototype incorporating several features for dealing with these issues, including level and EQ monitoring using brightness, shape, and size. Moreover, we explore the potential of Dynamic Query filtering for localizing channels with certain properties of interest. Finally, an explorative user evaluation compares different variations of the prototype, leading to a discussion of the importance of each feature.
Engineering Brief 292 (Download now)
Friday, September 30, 1:30 pm — 3:00 pm (Rm 408A)
Abstract:
Bob Orban is best known in the professional broadcast industry for the Orban Optimod FM audio processor. It introduced patented non-overshooting lowpass filters to the FM MPX audio chain, dramatically increasing loudness and signal-to-noise ratio without audible side effects. While Optimod could be used as a lethal weapon in the early FM loudness wars, Bob’s original goal was to create a processor with much lower distortion than anything then available. Optimod was a powerful tool that didn’t always need to run at “11” to achieve its goals. This was a true game-changer for FM radio broadcasting.
In addition, Bob created innovative recording studio and production tools, such as the Orban Stereo Synthesizer, de-essers and reverbs. Bob has also been an active participant in the National Radio Systems Committee.
This interview by longtime friend and colleague Greg Ogonowski will discuss the vast progress in audio processing technology that has shaped and molded delivery of audio for broadcast and streaming throughout the world. Join us for a true trip down tech memory lane, and up to current developments in broadcast audio processing.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Friday, September 30, 3:15 pm — 4:45 pm (Rm 408A)
Abstract:
No single area of media distribution is developing as quickly and with as much agility as Over-the-Top (OTT) Television. New media formats (such as 4K video) are typically making their first appearances there, so it could be expected that the immersive and personalized features of Next-Generation Audio (NGA) services for television sound may also debut in the OTT world. Meanwhile, current challenges of interoperability and loudness management in OTT TV still require some sorting out. Find out the latest on audio in today’s and tomorrow’s OTT TV environment, from experts currently working on solutions in this dynamic session, which will also report on development of AES Guidelines for audio practices in OTT TV and online video services.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Saturday, October 1, 9:00 am — 10:30 am (Rm 408A)
Abstract:
Many radio stations are building performance spaces to further engage listeners with more original content. These spaces serve not only traditional radio but also multimedia production presented on the internet. Not all such spaces were purpose-built; some are repurposed. We will explore choosing and equipping a performance space in an existing facility with limited resources.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Saturday, October 1, 10:45 am — 12:15 pm (Rm 408A)
Abstract:
Everything is going IP. And the “Internet of Things” means that, even in our world of audio, IP will soon be everywhere. What does this mean for the wire and cable used in installations? There are now thousands of products that could be used. And, while fiber optic cable and wireless transmission might give you options, the majority of these installations will probably be done on “traditional” twisted pairs, such as those in Category 5, 5e, 6, 6a, 7, and, soon to arrive, Category 8. You think you have a hard time figuring out which mic cable to use? These IP/Ethernet cables will be even more of a challenge. How do you approach this? How do you decide? Our panel will attempt to give you clear guidelines for choosing cables for IP applications.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Saturday, October 1, 5:00 pm — 6:30 pm (Rm 404AB)
Abstract:
Podcasts are an exciting opportunity for those of us in the audio world. Listenership increases yearly, companies dedicated solely to podcast production are springing up left and right, and a new audience is falling in love with listening. As a result, the traditional broadcast roles of audio engineer, sound designer, and producer are morphing and blending, so we must find ways to position ourselves as a valuable resource. Experimentation is rampant and there is more possibility than ever for creating engaging sound — but our audience is listening in challenging environments! Nonetheless we continue to push our art forward. This panel of experts will deal with these topics as well as discuss the craft of sound design and the challenges of mixing for the podcast audience.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Sunday, October 2, 11:30 am — 1:00 pm (Rm 502AB)
Abstract:
Follow the process from concept to “fade to black” as Mark King shares the methods and techniques of mixing Grease Live, television's most challenging live broadcast.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery