AES London 2010
Broadcast Audio Event Details

Saturday, May 22, 10:30 — 12:30 (Room C6)

W1 - Audio Network Control Protocols


Chair:
Richard Foss, Rhodes University - Grahamstown, South Africa
Panelists:
John Grant, Nine Tiles
Robby Gurdan, Universal Media Access Networks (UMAN)
Stefan Ledergerber, Harman Pro Audio
Philip Nye, Engineering Arts
Andy W. Schmeder, University of California, Berkeley - Berkeley, CA, USA

Abstract:
Digital audio networks have solved a range of problems related to audio distribution in contexts such as recording studios, stadiums, convention centers, theaters, and live concerts. Compared to analog solutions, they offer easier cabling, better immunity to interference, and enhanced control over audio routing and signal processing. There are several audio network types, and several audio network protocols that define the messaging necessary for connection management and control. The problem with this range of protocol solutions is that many professional audio devices are being manufactured without regard to global interoperability. In this workshop a panel of audio network protocol experts will present the features of the audio network protocols they are familiar with and discuss which features should appear in a common protocol.


Saturday, May 22, 14:00 — 16:00 (Room C2)

W2 - AES42 and Digital Microphones


Chair:
Helmut Wittek, SCHOEPS Mikrofone GmbH - Karlsruhe, Germany
Panelists:
Claudio Becker-Foss, DirectOut - Germany
Stephan Flock
Christian Langen
Stephan Peus
Gregor Zielinski, Sennheiser - Germany

Abstract:
The AES42 interface for digital microphones is not yet widely used. This may be due partly to the relative novelty of digital microphone technology, but also to a lack of knowledge of, and practice with, digital microphones and the corresponding interface. The advantages and disadvantages have to be communicated in an open and neutral way, regardless of commercial interests, on the basis of the actual needs of engineers.

Along with an available white paper about AES42 and digital microphones, which aims to provide neutral, in-depth information and was compiled by several authors, the workshop intends to separate the facts from the prejudices on this topic.


Saturday, May 22, 15:30 — 17:00 (Room C1)

T2 - Mastering for Broadcast


Presenter:
Darcy Proper, Galaxy Studios - Belgium

Abstract:
Mastering has often had an aura of mystery around it. Those "in the know" have always regarded it as a vital and necessary last step in the process of producing a record. Those who have never experienced it have often had only a vague idea of what good mastering could achieve. However, the loudness race in recent years has put the mastering community under pressure; on one side from the producers or labels who want their product louder than the competition and on the other side from the artists or mixers who don’t want their work smashed into a lifeless "brick" and maxed out by excessive use of limiter plug-ins.

Darcy Proper is a multi-Grammy-winning mastering engineer whose golden ears (and hands) have put the finishing touches on a vast array of high profile records, among many others those from Steely Dan. She will talk about her approach to her work and will also demo examples with various degrees of compression with a legacy broadcast processor as the final piece of gear in the signal chain, simulating a radio broadcast. The audience is then able to experience the effects and artifacts that different compression levels will cause at the consumer's end.


Sunday, May 23, 09:00 — 11:00 (Room C2)

T3 - Hearing and Hearing Loss Prevention


Presenter:
Benj Kanters, Columbia College - Chicago, IL, USA

Abstract:
The Hearing Conservation Seminar and HearTomorrow.Org are dedicated to promoting awareness of hearing loss and conservation. This program is specifically targeted at students and professionals in the audio and music industries. Experience has shown that engineers and musicians easily understand the concepts of hearing physiology, as many of the principles and theories are the same as those governing audio and acoustics. Moreover, these people are quick to understand the importance of developing their own safe listening habits, as well as being concerned for the hearing health of their clients and the music-listening public. The tutorial is a 2-hour presentation in three sections: first, an introduction to hearing physiology; second, noise-induced hearing loss; and third, practicing effective and sensible hearing conservation.


Sunday, May 23, 09:00 — 11:00 (Room C1)

W4 - Blu-ray as a High Resolution Audio Format for Stereo and Surround


Chair:
Stefan Bock, msm-studios GmbH - Munich, Germany
Panelists:
Simon Heyworth, SAM - UK
Morten Lindberg, 2L - Norway
Johannes Müller, msm-studios - Germany
Crispin Murray, Metropolis - UK
Ronald Prent, Galaxy Studios - Belgium

Abstract:
The adoption of Blu-ray Disc as the only HD packaged-media format offering up to eight channels of uncompressed high-resolution audio has eliminated at least one of the obstacles to getting high-resolution surround sound music to the market. The concept of using Blu-ray as a pure audio format will be explained, and Blu-ray will be positioned as the successor to both SACD and DVD-A. The operational functionality, and a dual concept that makes the format usable both with and without a screen, will be demonstrated with a few products that are already on the market.


Sunday, May 23, 10:00 — 12:00

TT2 - BBC – Broadcasting House


Abstract:
This impressive building in London W1 has been the headquarters of the BBC since 1932 and is undergoing a major redevelopment to ensure its future for the 21st century. Broadcasting House is the home of BBC National Radio; alongside its historic interior it houses a modern London Control Room, the main offices, and studios for Radio 3 (classical) and Radios 4 and 7 (speech). The building also contains a range of production studios, including Radio Drama facilities and the beautiful Art Deco Radio Theatre (concert hall), which was refurbished and reopened in 2006. Next door to Val Myer's Broadcasting House, work continues on the second phase of the development, which is on schedule to open in 2012 and will provide a modern extension to house BBC News and the BBC's World Service, making this central London site the largest live broadcasting center in the world.

Capacity limited to 30 persons. Transportation by tube.


Price: Free

Sunday, May 23, 11:30 — 13:00 (Room C1)

T4 - CANCELLED



Sunday, May 23, 13:30 — 18:30 (Room C3)

P11 - Network, Internet, and Broadcast Audio


Chair: Bob Walker, Consultant - UK

P11-1 A New Technology for the Assisted Mixing of Sport Events: Application to Live Football Broadcasting
Giulio Cengarle, Toni Mateos, Natanael Olaiz, Pau Arumí, Barcelona Media Innovation Centre - Barcelona, Spain
This paper presents a novel application for capturing the sound of the action during a football match by automatically mixing the signals of several microphones placed around the pitch, selecting only those microphones that are close to, or aiming at, the action. The sound engineer is presented with a user interface on which he or she can define and dynamically move the point of interest on a screen representing the pitch, while the application controls the faders of the broadcast console. The technology has been applied in the context of three-dimensional surround-sound playback of a Spanish first-division match.
Convention Paper 8037
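The selection principle described above, opening only the microphones near a movable point of interest, can be sketched as a simple distance-based gain law. This is an illustrative reconstruction under assumed parameters (the `radius` value and the linear crossfade), not the algorithm from the paper:

```python
import math

def mic_gains(point_of_interest, mic_positions, radius=20.0):
    """Distance-based fader law: microphones close to the point of
    interest are opened, distant ones are closed.

    `radius` (meters) and the linear crossfade are illustrative
    assumptions, not taken from the paper.
    """
    poi_x, poi_y = point_of_interest
    gains = []
    for mic_x, mic_y in mic_positions:
        d = math.hypot(mic_x - poi_x, mic_y - poi_y)
        # Full gain at the point of interest, fading to zero at `radius`.
        gains.append(max(0.0, 1.0 - d / radius))
    return gains
```

Moving the point of interest on the screen would simply re-evaluate such a function and push the resulting gains to the console faders.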

P11-2 Recovery Time of Redundant Ethernet-Based Networked Audio Systems
Maciej Janiszewski, Piotr Z. Kozlowski, Wroclaw University of Technology - Lower Silesia, Poland
Ethernet-based networked audio systems have become more popular among audio system designers. One of the most important features offered by a networked audio system is redundancy: the system can recover after different types of failures, whether of a cable or even a device. The redundancy protocols implemented by audio developers differ from the protocols known from computer networks, but both may be used in an Ethernet-based audio system. The most important attribute of redundancy in an audio system is the recovery time. This paper summarizes research carried out at Wroclaw University of Technology. It reports the recovery time after different types of failures, with different network protocols implemented, for all possible network topologies in CobraNet and EtherSound systems.
Convention Paper 8038

P11-3 Upping the Auntie: A Broadcaster’s Take on Ambisonics
Chris Baume, Anthony Churnside, British Broadcasting Corporation Research & Development - UK
This paper considers Ambisonics from a broadcaster’s point of view: to identify barriers preventing its adoption within the broadcast industry and explore the potential advantages were it to be adopted. This paper considers Ambisonics as a potential production and broadcast technology and attempts to assess the impact that the adoption of Ambisonics might have on both production workflows and the audience experience. This is done using two case studies: a large-scale music production of “The Last Night of the Proms” and a smaller scale radio drama production of “The Wonderful Wizard of Oz.” These examples are then used for two subjective listening tests: the first to assess the benefit of representing height allowed by Ambisonics and the second to compare the audience’s enjoyment of first order Ambisonics to stereo and 5.0 mixes.
Convention Paper 8039

P11-4 Audio-Video Synchronization for Post-Production over Managed Wide-Area Networks
Nathan Brock, Michelle Daniels, University of California San Diego - La Jolla, CA, USA; Steve Morris, Skywalker Sound - Marin County, CA, USA; Peter Otto, University of California San Diego - La Jolla, CA, USA
A persistent challenge with enabling remote collaboration for cinema post-production is synchronizing audio and video assets. This paper details efforts to guarantee that the sound quality and audio-video synchronization over networked collaborative systems will be measurably the same as that experienced in a traditional facility. This includes establishing a common word-clock source for all digital audio devices on the network, extending transport control and time code to all audio and video assets, adjusting latencies to ensure sample-accurate mixing between remote audio sources, and locking audio and video playback to within quarter-frame accuracy. We will detail our instantiation of these techniques at a demonstration given in December 2009 involving collaboration between a film editor in San Diego and a sound designer in Marin County, California.
Convention Paper 8040

P11-5 A Proxy Approach for Interoperability and Common Control of Networked Digital Audio Devices
Osedum P. Igumbor, Richard J. Foss, Rhodes University - Grahamstown, South Africa
This paper highlights the challenge that results from the availability of a large number of control protocols within the context of digital audio networks. Devices that conform to different protocols are unable to communicate with one another, even though they might be utilizing the same networking technology (Ethernet, IEEE 1394 serial bus, USB). This paper describes the use of a proxy that allows for high-level device interaction (by sending protocol messages) between networked devices. Furthermore, the proxy allows for a common controller to control the disparate networked devices.
Convention Paper 8041

P11-6 Network Neutral Control over Quality of Service Networks
Philip Foulkes, Richard Foss, Rhodes University - Grahamstown, South Africa
IEEE 1394 (FireWire) and Ethernet Audio/Video Bridging are two networking technologies that allow for the transportation of synchronized, low-latency, real-time audio and video data. Each networking technology has its own methods and techniques for establishing stream connections between the devices that reside on the networks. This paper discusses the interoperability of these two networking technologies via an audio gateway and the use of a common control protocol, AES-X170, to allow for the control of the parameters of these disparate networks. This control is provided by a software patchbay application.
Convention Paper 8042

P11-7 Relative Importance of Speech and Non-Speech Components in Program Loudness Assessment
Ian Dash, Australian Broadcasting Corporation - Sydney, NSW, Australia; Mark Bassett, Densil Cabrera, The University of Sydney - Sydney, NSW, Australia
It is commonly assumed in broadcasting and film production that audiences determine soundtrack loudness mainly from the speech component. While intelligibility considerations support this idea indirectly, the literature is very short on direct supporting evidence. A listening test was therefore conducted to test this hypothesis. Results suggest that listeners judge loudness from overall levels rather than speech levels. A secondary trend is that listeners tend to compare like with like. Thus, listeners will compare speech loudness with other speech content rather than with non-speech content and will compare loudness of non-speech content with other non-speech content more than with speech content. A recommendation is made on applying this result for informed program loudness control.
Convention Paper 8043

P11-8 Loudness Normalization in the Age of Portable Media Players
Martin Wolters, Harald Mundt, Dolby Germany GmbH - Nuremberg, Germany; Jeffrey Riedmiller, Dolby Laboratories Inc. - San Francisco, CA, USA
In recent years the increasing popularity of portable media devices among consumers has created new and unique audio challenges for content creators, distributors, and device manufacturers. Many of the latest devices can support a broad range of content types and media formats, including those often associated with high-quality (wider dynamic-range) experiences such as HDTV, Blu-ray, or DVD. However, portable media devices generally struggle to maintain consistent loudness and intelligibility across varying media and content types on their internal speaker(s) and/or headphone outputs. This paper proposes a nondestructive method to control playback loudness and dynamic range on portable devices, based on a worldwide standard for loudness measurement as defined by the ITU. In addition, the proposed method is compatible with existing playback software and with audio content following the Replay Gain (www.replaygain.org) proposal. The paper describes the current landscape of loudness levels across varying media and content types and introduces new, nondestructive concepts aimed at consistent loudness and intelligibility for portable media players.
Convention Paper 8044
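The nondestructive idea in the abstract above can be sketched in a few lines: the program loudness is measured once, only a playback gain is derived and carried alongside the content, and the encoded audio is never rewritten. The -23 LUFS target shown is an illustrative assumption of this sketch, not a value from the paper:

```python
def playback_gain_db(program_loudness_lufs, target_lufs=-23.0):
    """Derive a playback gain from a one-time loudness measurement.

    The audio itself is never modified; only this gain value is stored
    (e.g., as metadata) and applied at playback time, in the spirit of
    Replay Gain. The -23 LUFS target is an illustrative assumption.
    """
    return target_lufs - program_loudness_lufs

def apply_gain(samples, gain_db):
    """Apply the stored gain to sample values at playback time."""
    g = 10.0 ** (gain_db / 20.0)
    return [s * g for s in samples]
```

Because the gain is applied only on playback, the same file can be rendered differently on a loudspeaker dock and on headphones without any destructive re-encoding.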

P11-9 Determining an Optimal Gated Loudness Measurement for TV Sound Normalization
Eelco Grimm, Grimm-Audio; Esben Skovenborg, tc-electronic; Gerhard Spikofski, Institute of Broadcast Technology - Berlin, Germany
Undesirable loudness jumps are a notorious problem in television broadcast. The solution lies in switching to loudness-based metering and program normalization. In Europe this development has been led by the EBU P/LOUD group, working toward a single target level for loudness normalization that applies to all genres of programs. P/LOUD found that loudness normalization as specified by ITU-R BS.1770-1 works fairly well for the majority of broadcast programs. However, it was realized that wide-loudness-range programs were not well aligned with other programs when using ITU-R BS.1770-1 directly, and that adding a measurement gate provided a simple yet effective solution. P/LOUD therefore conducted a formal listening experiment to perform a subjective evaluation of different gate parameters. This paper specifies the method of the subjective evaluation and presents the results in terms of preferred gating parameters.
Convention Paper 8154
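The measurement gate evaluated in the paper can be sketched as a two-stage gate over per-block powers. The thresholds shown (-70 LUFS absolute, -10 LU relative) match those later standardized in ITU-R BS.1770-2; the experiment described above compared several candidate values, so treat them as one possible parameterization. K-weighting, channel weighting, and the 400 ms blocking are assumed to happen upstream of this function:

```python
import math

def gated_loudness(block_powers, abs_gate_lufs=-70.0, rel_gate_lu=-10.0):
    """Two-stage gated loudness over per-block mean-square powers.

    Simplified sketch: K-weighting and channel summation per
    ITU-R BS.1770 are assumed to have been applied upstream.
    """
    def lufs(power):
        return -0.691 + 10.0 * math.log10(power)

    # Stage 1: absolute gate discards silence and faint background.
    stage1 = [p for p in block_powers if p > 0.0 and lufs(p) > abs_gate_lufs]
    if not stage1:
        return float("-inf")
    # Stage 2: relative gate, set rel_gate_lu below the loudness
    # of the blocks that survived stage 1.
    threshold = lufs(sum(stage1) / len(stage1)) + rel_gate_lu
    stage2 = [p for p in stage1 if lufs(p) > threshold]
    if not stage2:
        return float("-inf")
    return lufs(sum(stage2) / len(stage2))
```

The effect of the gate is that long quiet passages no longer drag the measured program loudness down, which is exactly what made wide-loudness-range programs misalign under the ungated BS.1770-1 measurement.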

P11-10 Analog or Digital? A Case-Study to Examine Pedagogical Approaches to Recording Studio Practice
Andrew King, University of Hull - Scarborough, North Yorkshire, UK
This paper explores the use of digital and analog mixing consoles in the recording studio over a single drum-kit recording session. Previous research has examined contingent learning, problem-solving, and collaborative learning within this environment. However, while there have been empirical investigations into the use of computer-based software and into interaction around and within computers, these have not taken into consideration the use of complex recording apparatus. A qualitative case-study approach was used in this investigation. Thirty hours of video data were captured and transcribed. A preliminary analysis of the data shows that there are differences between the types of problems encountered by learners when using an analog versus a digital mixing console.
Convention Paper 8045


Sunday, May 23, 14:00 — 16:00

TT4 - BBC – Maida Vale Studios


Abstract:
Built in 1909 as the “Maida Vale Roller Skating Palace and Club,” this building has been occupied by the BBC for the last 75 years. The site contains seven large studios, five of which are currently in use. This iconic building was home to the BBC Radiophonic Workshop, which was influential in early electronic music; its former studios are now home to the operation to digitize the BBC's audio archives. Maida Vale is perhaps best known for the “John Peel Sessions,” recorded in its rock and pop studios over several decades for Radio 1. Bands as diverse as The Beatles, Led Zeppelin, and Nirvana have recorded here, along with many World Music bands. The site is also the home of the BBC Symphony Orchestra and world-class Radio Drama production.
Capacity limited to 30 persons. Transportation by AES bus or cab.


Price: £15

Monday, May 24, 09:00 — 10:15 (Room C1)

T6 - Classical Music with Perspective


Presenter:
Sabine Maier, Tonmeister - Vienna, Austria

Abstract:
Concerts of classical music, as well as operas, have been part of broadcast programming since the beginning of television. The aesthetic relationship between sound and picture plays an important part in a satisfying experience for the consumer. The question of how far (if at all) the audio perspective should follow the video angle, or vice versa, has always been a subject of discussion among sound engineers and producers. As part of a diploma thesis, this question was investigated systematically. One excerpt from the famous New Year's Concert (from 2009) was remixed into four distinctly different versions (in stereo and surround sound). Close to 80 lay listeners with an interest in classical music judged these versions against the same picture, indicating whether they found the audio perspective appropriate to the video or not.

In this tutorial the experimental procedure as well as the results will be discussed. Examples of the different mixes will be played.


Monday, May 24, 14:00 — 15:45 (Room C1)

W11 - Surround for Sports


Presenters:
Martin Black, Senior Sound Supervisor & Technical Consultant, BSkyB - UK
Peter Davey, Audio Quality Supervisor at Beijing 2008 Olympics and Vancouver 2010 Olympics
Ian Rosam, 5.1 Audio Quality Supervisor for FIFA World Cup, Euro 2008, Beijing 2008 and Vancouver 2010 Olympics

Abstract:
Surround for Sports is an increasingly important area of multichannel audio. It is a de facto standard for large-scale productions such as the Olympics or the football World and European Championships.

The presenters, all experienced audio supervisors for such events, will touch on a variety of subjects regarding surround sound design for sports, along with many practical issues: crowd/audience; field-of-play FX; game sounds, e.g., ball kicks; competitors, e.g., curling; referees/umpires, e.g., rugby/tennis; board sounds, e.g., darts/basketball; scoring/timing, e.g., fencing buzzers, boxing time bell; commentators/reporters out of vision; commentators/presenters/reporters in vision. Where those elements should sit in a 5.1 mix will be discussed, as well as: use of the center channel, use of the LFE, HDTV stereo fold-down in a set-top box, Dolby E, metadata, bass management, and, last but not least, localization of sounds with 5.1 and human hearing.
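One of the items above, HDTV stereo fold-down in a set-top box, can be illustrated with a minimal sketch. The -3 dB (0.7071) coefficients shown are the common ITU-style defaults, not necessarily those of any particular broadcast chain; in practice a set-top box derives them from Dolby E / AC-3 metadata:

```python
def fold_down_5_1(L, R, C, LFE, Ls, Rs, center_mix=0.7071, surround_mix=0.7071):
    """Fold one 5.1 sample frame down to stereo.

    Illustrative ITU-style downmix; the LFE channel is typically
    omitted from the fold-down, as it is here.
    """
    lo = L + center_mix * C + surround_mix * Ls
    ro = R + center_mix * C + surround_mix * Rs
    return lo, ro
```

The choice of coefficients matters audibly: too much center attenuation buries commentary in the stereo mix, while too little pulls crowd ambience forward, which is part of why metadata-driven downmixing is discussed alongside the mix itself.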


Monday, May 24, 16:00 — 18:00 (Room C2)

W13 - Loudness in Broadcasting—The New EBU Recommendation R128


Chair:
Andrew Mason, BBC R&D
Panelists:
Jean-Paul Moerman, Salzbrenner Stagetec Media Group - Buttenheim, Germany
Richard van Everdingen, Dutch Broadcasting Loudness Committee

Abstract:
The EBU group P/LOUD is approaching the final stage of its work, which will result in recommendations that will have a profound effect on any audio production in broadcasting. The gradual switch from peak to loudness normalization, combined with a new maximum true-peak level and the use of the descriptor “loudness range,” makes it possible for the first time to fully characterize the audio part of a program. More importantly, it has the potential to solve the most frequent complaint of listeners: severe level inconsistencies. This is the first time that the new EBU loudness recommendation R128 is presented in detail, alongside a thorough introduction to the subject as well as practical case studies.


Tuesday, May 25, 09:00 — 10:45 (Room C1)

W15 - Single-Unit Surround Microphones


Chair:
Eddy B. Brixen, EBB-Consult - Smørum, Denmark
Panelists:
Mikkel Nymand, DPA Microphones
Mattias Strömberg, Milab - Helsingborg, Sweden
Helmut Wittek, SCHOEPS Mikrofone GmbH

Abstract:
The workshop will present available single-unit surround sound microphones in a kind of shoot-out. A number of these microphones are available, and more are on their way. They are based on different principles; however, their compact size may or may not impose restrictions on performance. This workshop will present the different products and the ideas and theories behind them.


Tuesday, May 25, 11:15 — 12:15 (Room C2)

T10 - ADR


Presenter:
Dave Humphries, Loopsync

Abstract:
ADR is becoming more necessary than ever. Noisy locations, special effects, and dialog changes are a major part of any drama production. Wind noise, generators, lighting chokes, aircraft, traffic, and rain can all be eliminated by good ADR.
In this tutorial Dave Humphries will discuss why we need to put actors through this process, how we can help minimize the need for it, what a dialog editor needs to know, and how to help actors succeed at ADR. He will also demonstrate the art of recording ADR live.