AES Vienna 2007
Tutorial Session Details

Saturday, May 5, 14:00 — 16:00


Jeff Levison

The increasing popularity of multichannel playback systems has required the development of a variety of recording and mixing techniques. Artists and engineers are more aware of the impact of speaker placement on the balance between envelopment and imaging, especially for the surround channels, and for the reproduction (or illusion) of reverberation and other aspects of the acoustic environment. Increasing the number of reproduced channels can ease this envelopment/imaging compromise, and new high-definition playback methods, such as Blu-ray and HD DVD, now offer eight channels of discrete reproduction with simultaneous high-quality video. Besides serving as extra surround channels, these additional channels could carry height channels for greater spaciousness and vertical pan positioning. This tutorial will present a group of realized examples of 7.1 with four surround channels, and alternate mixes with 5.1 plus height.

Comparisons will be made between stereo, 5.1, and these new 7.1 mixes. Proposals for other higher-order systems and possible improvements to three-dimensional audio presentation will be discussed.

Saturday, May 5, 15:00 — 18:00


Ron Streicher

What is stereo? Why and how do we hear with spatial acuity? How can we realistically capture and reproduce the stereo sound field with just two microphones and two loudspeakers? These are but a few of the questions discussed in this in-depth tutorial.

The session begins with a discussion and demonstration of how the human ear-brain hearing system works. This is followed by a historical overview of the development of stereophonic recording. The main body of the session presents a comprehensive analysis of the various common stereophonic microphone configurations and concludes with numerous recorded examples for evaluation and comparison of the techniques discussed.
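One of the common configurations covered in such analyses is the mid-side (M-S) pair, whose decode to left/right is simple arithmetic. A minimal illustrative sketch (ours, not part of the session materials):

```python
def ms_to_lr(mid, side, width=1.0):
    """Decode a mid-side (M-S) microphone pair to left/right:
    L = M + w*S, R = M - w*S, where w scales the stereo width."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right
```

A width of 1.0 gives the standard decode; raising it widens the stereo image by emphasizing the side (figure-eight) signal.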

Saturday, May 5, 16:30 — 18:30


Jeff Levison

The new Blu-ray and HD DVD systems make loudspeaker remapping possible: the loudspeaker positions of a given mix can be described in the stream, and on playback the power distribution is rebalanced across the room's actual loudspeaker positions when these differ from the source positions (or in the number of speakers). Several examples will be played to demonstrate various remapping and downmixing scenarios in a real-world manner.
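Remapping amounts to re-panning each mix channel between the playback speakers that bracket its intended position while preserving power. A hedged sketch of the idea, using a generic constant-power pan law (this is an illustration, not the actual Blu-ray/HD DVD algorithm; the angles and function name are ours):

```python
import math

def remap_gains(src_angle, spk_a, spk_b):
    """Constant-power pan of one source channel between the two
    playback speakers (at angles spk_a, spk_b) that bracket it.
    Returns (gain_a, gain_b) with gain_a**2 + gain_b**2 == 1."""
    p = (src_angle - spk_a) / (spk_b - spk_a)   # pan position in [0, 1]
    return math.cos(p * math.pi / 2), math.sin(p * math.pi / 2)
```

For example, a mix speaker at 110 degrees remapped onto playback speakers at 90 and 150 degrees lands one third of the way between them, and the squared gains still sum to one.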

Sunday, May 6, 09:00 — 10:30


Dietrich Schüller, Phonogrammarchiv Vienna

The world's audio heritage is estimated to amount to 100 million hours of recorded documents. A considerable part is at severe risk of not surviving in the long term, as it is still kept on analog or digital single carriers, which sooner or later are prone to deterioration. An even greater threat is the rapid withdrawal of specific replay equipment and spare parts from manufacture. This will lead to a situation where otherwise well-preserved recordings can no longer be retrieved for lack of replay equipment.

This tutorial concentrates on two basic documents for long-term audio preservation, released by the Technical Committee of IASA, the International Association of Sound and Audiovisual Archives:
• IASA-TC 03, The Safeguarding of the Audio Heritage: Ethics, Principles and Preservation Strategy
• IASA-TC 04, Guidelines on the Production and Preservation of Digital Audio Objects

The tutorial will also survey the respective AES Standards that concentrate on the storage and handling of various types of audio carriers.

Sunday, May 6, 11:00 — 12:30


Tim Harris, Snell & Willcox

This tutorial takes an audio-oriented look at the MXF file format. It starts with a general introduction to the basics of MXF: what it is, why it was invented, by whom it was developed, and how it was standardized. The tutorial will then focus on the MXF synchronization model and the format's ability to combine audio, video, data, and metadata in a versatile way.

Carriage of essence within MXF will then be explained with a particular focus on audio. Attention will be drawn to the involvement of the AES in providing the underlying international reference for the Broadcast Wave format. Metadata annotation of essence, particularly recent work on the MXF Master Format Guidelines for multi-lingual annotation, will be explained.

The talk will be interspersed with demonstrations of how MXF software could be used in real-world workflows.
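Under the hood, MXF files are sequences of KLV (key-length-value) triplets per SMPTE 336M: a 16-byte universal label key, a BER-encoded length, then the value. A minimal illustrative parser for one triplet (the function name is ours, not from any MXF library):

```python
def read_klv(buf, pos=0):
    """Read one KLV triplet (SMPTE 336M, as used by MXF).
    Returns (key, value, position_after_triplet)."""
    key = buf[pos:pos + 16]               # 16-byte universal label
    pos += 16
    first = buf[pos]
    pos += 1
    if first < 0x80:                      # short-form BER length
        length = first
    else:                                 # long form: low 7 bits = byte count
        n = first & 0x7F
        length = int.from_bytes(buf[pos:pos + n], "big")
        pos += n
    return key, buf[pos:pos + length], pos + length
```

Walking a file is then just repeated calls advancing the returned position, dispatching on the key to decide whether a value holds metadata or essence.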

Sunday, May 6, 13:00 — 15:30


Dennis Baxter, Audio for the Olympics
Akira Fukada, NHK Tokyo
Gaute Nistov, NRK Oslo

Three experts will relate stories of their individual experiences in broadcasting.

Dennis Baxter (14:00) will tell of his experience broadcasting the Olympics. The Olympics uses production teams from all over the world, including the host country, which is currently China. Most of these teams are considered the best at covering their particular sport: YLE (Finland) and NRK (Norway) produce cross-country skiing and winter biathlon, the BBC has produced tennis, and New Zealand covers sailing. The host country (2004 Greece, 2006 Italy, 2008 China) is encouraged by the Host Broadcaster to participate as much as possible, but these broadcasters often lack the experience. One of the greatest challenges in the broadcast production of the Olympics is consistency of production. Sound mixers from over 30 different countries bring various levels of technical skill and personal experience that influence the way the sound is produced. Moreover, sound mixing is very personal and subjective, and what is right or wrong is not easily defined. Several factors influence sound mixing:
• Cultural interpretation of television production
• Psychological: television has been dominated by video and by engineers and technicians who do not understand audio. Often there has been a lack of resources and support, and sound engineers sometimes just give up!
• Personal prejudices: most North American sound mixers disapprove of live sound sweetening
• Ego
• Experience
The presentation will explore these areas and the subjectivity of sound production.

Gaute Nistov's (13:00) topic is location recording, from the bottom of the North Sea to the Pyramids of Egypt. On 2 October 2006 singer Katie Melua performed a concert more than 300 meters below sea level, inside one of the concrete shafts that anchor the Troll gas rig to the seabed. Besides being Europe's highest-selling female artist last year, with this North Sea gig Katie also secured a world record for the deepest underwater concert. The special acoustic properties of the shaft and the very strict security measures on the platform were among the challenges of this extraordinary production.
Only weeks later, in Cairo, a performance of Norwegian playwright Henrik Ibsen's “Peer Gynt” was staged in front of the Great Pyramids of Giza. The combined effort of more than 30 actors and singers, the Cairo Symphony Orchestra, and a 60-plus strong choir at the outdoor arena posed a very different set of requirements for a sound production that had to accommodate both a live local transmission on the night and a recording for postproduction. Nistov was in charge of the TV sound production on both occasions and will discuss the technical solutions used, with an emphasis on production planning.

Akira Fukada (14:45) talks of his challenges broadcasting two concerts staged in demanding locations in Japan. The first was a performance of “field of summer” in the city center of Hiroshima, a concert marking the 60th anniversary of the atomic bombing. The piece was performed at the exact place where the atomic bomb was dropped, a sacred place for Japan, so many regulations applied, and the concert was held in the severe heat of summer. It was not only broadcast live in 5.1 surround sound but also streamed simultaneously worldwide over the Internet. The second concert took place inside a mountain whose huge base rock gives sounds performed there a characteristic reverberation.
Composer Isao Tomita and Fukada planned the concert around the acoustic properties of this space: players were placed at various spots on the mountain, their sounds were projected onto the base rock through the PA system, and the reflected sound was recorded with a surround microphone installed in the space. The performance had a distinctive sound due to the mountain's effect. Although it unfortunately rained during the concert, the sound of the rain, caught by the surround microphone, made for an exceptional sound effect.

Sunday, May 6, 16:00 — 18:00


Simon Bishop, Freelance Sound Recordist - UK
Richard Merrick, Freelance Sound Recordist - UK

Simon Bishop and Richard Merrick contrast location audio acquisition at both ends of the budget spectrum. They discuss and compare techniques and tricks collectively acquired over 60 man-years of experience, from being awash with money and equipment to begging, borrowing, and hunting on eBay. Light-hearted but informative, both will prove that it's not the size of your nail but the skill of the guy with the hammer!

Monday, May 7, 11:30 — 13:00


Chris Woolf, Broadcast Engineering Systems, Ltd.

Synchronization and timecode—intimately connected but not to be confused with each other—form the time-axis shells of buildings within which most sound practitioners must house their work. Rules-of-thumb, tricks-that-seem-to-work, and even blind faith often support rather shaky structures so this tutorial provides some underpinning: a foundation of solid bricks.

The session presumes very little and will be useful to those with limited experience. However, it will also appeal to those with gnarled hands and a lot of dust under their fingernails but who harbour secret doubts about the security of their techniques—dark glasses and a false moustache may be worn.
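As an example of the kind of detail that trips up even experienced hands, consider SMPTE drop-frame timecode at 29.97 fps: two frame numbers are skipped at the start of each minute except every tenth minute, keeping labels aligned with wall-clock time. A sketch of the standard frame-count-to-label conversion (illustrative, not taken from the session):

```python
def frames_to_dropframe(fc):
    """Convert a 29.97 fps frame count to a SMPTE drop-frame label.
    Two frame numbers are dropped each minute except every tenth,
    so each 10-minute block holds 17982 actual frames."""
    d, m = divmod(fc, 17982)                 # whole 10-minute blocks
    if m >= 2:
        fc += 18 * d + 2 * ((m - 2) // 1798)  # add back dropped labels
    else:
        fc += 18 * d                          # first two frames of a block
    ff = fc % 30
    ss = (fc // 30) % 60
    mm = (fc // 1800) % 60
    hh = (fc // 108000) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

Note how one real minute of frames (1798 of them) lands on the label 00:01:00;02, since labels ;00 and ;01 of that minute are skipped.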

Monday, May 7, 14:00 — 16:00


Bill Whitlock, Jensen Transformers, Inc. - Chatsworth, CA, USA

One goal in the design of audio equipment is to maintain a high signal-to-noise ratio. But audio equipment most often operates on utility AC power, which, even under ideal conditions, normally creates ground voltage differences, magnetic fields, and electric fields. RF energy is increasingly omnipresent, too. Balanced interfaces are capable of conveying wide dynamic range analog audio signals while giving them unrivaled immunity to interference. Realizing this full capability in real-world, mass-produced equipment is not necessarily costly but requires some understanding of several common mistakes made by equipment designers. The telephone company pioneered the widespread use of balanced lines and for 50 years virtually all audio equipment used transformers at its balanced inputs and outputs—their high noise rejection was taken for granted.

When solid-state differential amplifiers began replacing transformers, most designers failed to recognize the importance of common-mode impedances—which are solely responsible for noise rejection. Instead, most believed that “balance” meant equal and opposite signal swings—which is a myth. As a result, most modern audio equipment has poor noise rejection when operating in real-world systems, even though it may have impressive rejection in a laboratory test. The IEC recognized this dichotomy when they revised their CMRR test standards in 2000 (at the urging of this author). A new IC uses bootstrap techniques to raise its common-mode impedances, and real-world noise rejection, to levels comparable to the finest transformers.

The three basic types of balanced output circuits, each with a peculiar set of tradeoffs, must be accommodated by balanced input circuits. Further, certain cable constructions and shield connections can degrade noise rejection of an otherwise perfect interface. A very common equipment design error, the “pin 1 problem,” causes shield connections to behave as low-impedance audio inputs, allowing power-line noise and RF interference to enter the signal path.
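The role of common-mode impedances can be seen in a simple divider model: the common-mode voltage reaches each input pin through that leg's source impedance and the input's common-mode impedance, and any mismatch converts part of it into a differential signal. A hedged sketch of this idealized model (component values below are illustrative assumptions, not measurements):

```python
import math

def cmrr_db(zcm, zs1, zs2):
    """Approximate CMRR of a balanced input, simple divider model.
    zcm: common-mode input impedance of each leg (assumed equal),
    zs1, zs2: source impedances of the two legs; their mismatch
    converts common-mode voltage into a differential signal."""
    g1 = zcm / (zcm + zs1)                 # divider gain, leg 1
    g2 = zcm / (zcm + zs2)                 # divider gain, leg 2
    conversion = abs(g1 - g2)              # common-mode -> differential
    if conversion == 0:
        return float("inf")
    return 20 * math.log10(1 / conversion)
```

With zcm around 20 kΩ, typical of a plain differential amplifier stage, a mere 10 Ω source imbalance limits the model to roughly 66 dB; raising zcm toward tens of megohms, as bootstrapped inputs or good transformers do, pushes it well past 120 dB.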

Tuesday, May 8, 11:30 — 13:30


Bill Whitlock, Jensen Transformers, Inc. - Chatsworth, CA, USA

Many designers and installers of audio/video systems think of grounding and interfacing as a “black art.” Do signal cables really “pick up” noise, presumably from the air like a radio receiver? Equipment manufacturers, installers, and users rarely understand the real sources of system noise and ground-loop problems, routinely overlooking or ignoring basic laws of physics. Although myth and misinformation are epidemic, this tutorial brings insight and knowledge to the subject.

Signals accumulate noise and interference as they flow through system equipment and cables. Both balanced and unbalanced interfaces transport signals but are also vulnerable to coupling of interference from the power line and other sources. The realities of AC power distribution and safety are such that some widely used noise-reduction strategies are both illegal and dangerous. Properly wired, fully code-compliant systems always exhibit small but significant residual voltages between pieces of equipment, as well as tiny leakage currents that flow in signal cables. The unbalanced interface has an intrinsic problem, common-impedance coupling, making it very vulnerable to noise. The balanced interface, because of a property called common-mode rejection, can theoretically nullify noise problems; however, balanced interfaces are widely misunderstood, and their common-mode rejection suffers severe degradation in most real-world systems. Many pieces of equipment, because of an innocent design error, have a built-in noise-coupling mechanism dubbed the “pin 1 problem” by Neil Muncy.

A simple troubleshooting method that uses no test equipment will be described; it can pinpoint the exact location and cause of system noise. Most often, devices known as ground isolators are the best way to eliminate noise coupling. Signal quality and other practical issues are discussed, as well as how to properly connect unbalanced and balanced interfaces to each other.
While immunity to RF interference is a part of good equipment design, it must often be provided externally. Finally, power line treatments such as technical power, balanced power, power isolation transformers, and surge suppression are discussed.
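Common-impedance coupling in an unbalanced interface is just Ohm's law: leakage current flowing through the cable shield's resistance produces a hum voltage directly in series with the signal. A small illustrative calculation (the values used below are assumptions, not measurements):

```python
import math

def hum_level_db(leak_current_a, shield_ohms, signal_vrms):
    """Hum level from common-impedance coupling, relative to the signal.
    The leakage current through the shield resistance appears as a
    voltage (Ohm's law) in series with the unbalanced signal."""
    v_hum = leak_current_a * shield_ohms
    return 20 * math.log10(v_hum / signal_vrms)
```

For instance, 300 µA of leakage through a 0.1 Ω shield gives 30 µV of hum; against a 316 mV consumer-level signal that is only about -80 dB, clearly audible in quiet passages.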

Tuesday, May 8, 12:30 — 14:30


Mathias Coinchon, EBU
Lars Jonsson, EBU/Swedish Radio
Gregory Massey, APT Ltd. - Ireland

Audio-over-IP end units have become common in radio and TV operations for streaming audio over IP networks, from remote sites or local offices into main studio centres. ISDN is gradually being replaced by IP circuits.

The IP networks used can be well-managed private networks with controlled quality of service. The Internet, too, is increasingly used for various cases of radio contribution, especially over longer distances. Radio correspondents will be able to choose in their equipment between ISDN and the Internet via ADSL to deliver their reports. In France, distribution to FM transmitters via IP over well-managed MPLS networks is even planned as a replacement for older circuits.

More than 15 manufacturers provide equipment for these applications.

With almost no exceptions, end units from one manufacturer today are not compatible with another company's units. Based on an initiative from German vendors and broadcasters, the European Broadcasting Union (EBU) has started a project group, N/ACIP (Audio Contribution over IP), to propose a method of creating interoperability for audio over IP. A draft standard has been proposed by the EBU, and some manufacturers have already begun implementing it as a minimum interoperability option.

The tutorial will cover the standardization process and give a basic overview of audio over IP.
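Audio contribution over IP typically rides on RTP (RFC 3550) over UDP. As a sketch of what such end units exchange on the wire, here is the 12-byte fixed RTP header being packed (payload type 11 is linear 16-bit 44.1 kHz mono audio in the standard audio/video profile; this is an illustration, not the N/ACIP specification itself):

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=11, marker=False):
    """Pack the 12-byte fixed RTP header (RFC 3550):
    version=2, no padding, no extension, no CSRC entries."""
    b0 = 2 << 6                                   # version bits
    b1 = (0x80 if marker else 0) | (payload_type & 0x7F)
    return struct.pack("!BBHII", b0, b1,
                       seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF,
                       ssrc & 0xFFFFFFFF)
```

A sender increments `seq` per packet and advances `timestamp` by the number of audio samples per packet, which is how the receiver reorders and times playout.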

Tuesday, May 8, 14:00 — 17:00


Wolfgang Klippel, Klippel GmbH

This tutorial addresses the relationship between nonlinear distortion measurements and the nonlinearities that are the physical causes of signal distortion in loudspeakers, headphones, micro-speakers, and other transducers. Using simulation techniques, characteristic symptoms are identified for each nonlinearity and presented systematically in a guide for loudspeaker diagnostics. This information is important for understanding the implications of nonlinear parameters and for performing measurements that describe the loudspeaker more comprehensively. The practical application of these techniques is demonstrated on real examples.
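The simulation idea can be illustrated with a memoryless polynomial nonlinearity: a quadratic term produces a 2nd harmonic and a cubic term a 3rd, with amplitudes that follow directly from trigonometric identities. A minimal toy model (far simpler than the lumped-parameter transducer models the tutorial concerns):

```python
import numpy as np

def harmonic_levels(a2=0.1, a3=0.05, n=1024):
    """Drive y = x + a2*x**2 + a3*x**3 with a unit sine and return the
    2nd and 3rd harmonic amplitudes relative to the fundamental.
    Identities: x**2 -> (a2/2) at 2f, x**3 -> (a3/4) at 3f."""
    k = 8                                    # fundamental FFT bin
    t = np.arange(n)
    x = np.sin(2 * np.pi * k * t / n)
    y = x + a2 * x**2 + a3 * x**3
    spec = np.abs(np.fft.rfft(y)) / (n / 2)  # scale bins to amplitude
    return spec[2 * k] / spec[k], spec[3 * k] / spec[k]
```

With the default coefficients the 2nd and 3rd harmonics come out near a2/2 and a3/4 of the fundamental, which is the kind of symptom-to-cause mapping the diagnostics rely on.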

Tuesday, May 8, 15:00 — 17:00


Lars Jonsson, EBU/Swedish Radio
Gregory Massey, APT Ltd. - Ireland
Gerhard Stoll, IRT - Munich, Germany

Audio quality in new low bit-rate distribution systems, from a broadcaster's perspective of cascades with many re-encodings at too-low bit rates: what is the solution to this problem?

In all new digital distribution systems broadcasters face the problem of using perceptual coding at every stage of the broadcasting chain, from early capture through contribution and editing to on-air transmission. Home systems, too, now record to digital media using low bit-rate coding. The resulting overall quality is often degraded by cascading artifacts across more than five steps of coding and re-encoding.
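Why cascades hurt can be seen in an idealized model that treats each coding generation as adding independent noise: noise powers add, so n identical generations cost about 10·log10(n) dB. Real perceptual coders are not additive-noise boxes, so this is only a hedged illustration of the trend:

```python
import math

def cascaded_snr_db(stage_snrs_db):
    """Overall SNR of a cascade, modeling each coding generation as an
    independent additive noise source: noise powers sum linearly."""
    total_noise = sum(10 ** (-s / 10) for s in stage_snrs_db)
    return -10 * math.log10(total_noise)
```

Five generations at 60 dB each drop to roughly 53 dB in this model, and the loss compounds faster when any single stage runs at a much lower bit rate than the rest.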

This tutorial discusses state-of-the-art methods and listening test results from within the EBU to overcome these problems.