AES London 2010
Saturday, May 22, 14:00 — 15:30
T1 - Do-it-Yourself Semantic Audio
Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Content-based music information retrieval (MIR) and similar applications require advanced algorithms that often overburden non-expert developers. However, many building blocks are available, mostly for free, that significantly ease software development, for instance of similarity search methods, or that serve as components for ad-hoc solutions, for instance in forensics or linguistics. This tutorial looks into software libraries/frameworks (e.g., MARSYAS and CLAM), toolboxes (e.g., MIRtoolbox), Web-based services (e.g., Echo Nest), and stand-alone software (e.g., Sonic Visualiser) that help with the extraction of audio features and/or execute basic machine learning algorithms. Focusing on solutions that require little to no programming in the classical sense, the major part of the tutorial consists of live demos of hand-picked routes to rolling one's own semantic audio application.
Saturday, May 22, 15:30 — 17:00 (Room C1)
T2 - Mastering for Broadcast
Darcy Proper, Galaxy Studios - Belgium
Mastering has often had an aura of mystery around it. Those "in the know" have always regarded it as a vital and necessary last step in the process of producing a record. Those who have never experienced it have often had only a vague idea of what good mastering could achieve. However, the loudness race in recent years has put the mastering community under pressure; on one side from the producers or labels who want their product louder than the competition and on the other side from the artists or mixers who don’t want their work smashed into a lifeless "brick" and maxed out by excessive use of limiter plug-ins.
Darcy Proper is a multi-Grammy-winning mastering engineer whose golden ears (and hands) have put the finishing touches on a vast array of high-profile records, including those of Steely Dan, among many others. She will talk about her approach to her work and will also demo examples with various degrees of compression, with a legacy broadcast processor as the final piece of gear in the signal chain simulating a radio broadcast. The audience will then be able to experience the effects and artifacts that different compression levels cause at the consumer's end.
Sunday, May 23, 09:00 — 11:00 (Room C2)
T3 - Hearing and Hearing Loss Prevention
Benj Kanters, Columbia College - Chicago, IL, USA
The Hearing Conservation Seminar and HearTomorrow.Org are dedicated to promoting awareness of hearing loss and conservation. This program is specifically targeted at students and professionals in the audio and music industries. Experience has shown that engineers and musicians easily understand the concepts of hearing physiology, as many of the principles and theories are the same as those governing audio and acoustics. Moreover, these people are quick to understand the importance of developing their own safe listening habits, as well as being concerned for the hearing health of their clients and the music-listening public. The tutorial is a 2-hour presentation in three sections: first, an introduction to hearing physiology; second, noise-induced loss; and third, practicing effective and sensible hearing conservation.
Sunday, May 23, 11:30 — 13:00 (Room C1)
T4 - CANCELLED
Sunday, May 23, 16:00 — 18:00 (Room C2)
T5 - Spatial Audio Reproduction: From Theory to Production
Frank Melchior, IOSONO GmbH - Erfurt, Germany
Sascha Spors, Deutsche Telekom AG Laboratories - Berlin, Germany
Advanced high-resolution spatial sound reproduction systems like Wave Field Synthesis (WFS) and Higher-Order Ambisonics (HOA) are being used increasingly. Consequently, more and more material is being produced for such systems. Established channel-based production processes from stereophony can only be applied to a certain extent. In the future, a paradigm shift toward object-based audio production will have to take place in order to cope with the needs of systems like WFS. This tutorial builds a bridge from the physical foundations of such systems, through their practical implementation, to efficient production processes. The focus is on WFS; however, the findings will also be applicable to other systems. The tutorial is accompanied by practical examples of object-based productions for WFS.
Monday, May 24, 09:00 — 10:15 (Room C1)
T6 - Classical Music with Perspective
Sabine Maier, Tonmeister - Vienna, Austria
Concerts of classical music, as well as operas, have been a part of broadcast programming since the beginning of television. The aesthetic relationship between sound and picture plays an important part in the satisfactory experience of the consumer. The question of how far the audio perspective (if at all!) should follow the video angle (or vice versa) has always been a subject of discussion among sound engineers and producers. As part of a diploma thesis, this aspect has been investigated systematically. One excerpt of the famous New Year's Concert (from 2009) was remixed into four distinctly different versions (in stereo and surround sound). Close to 80 lay listeners with an interest in classical music had the task of judging, for each version played to the same picture, whether they found the audio perspective appropriate to the video or not.
In this tutorial the experimental procedure as well as the results will be discussed. Examples of the different mixes will be played.
Monday, May 24, 14:00 — 15:45 (Room C6)
T7 - Screen Current Induced Noise (SCIN), Shielding, and Grounding—Myths vs. Reality
Bruce C. Olson, Olson Sound Design - Brooklyn Park, USA
John Woodgate, J M Woodgate and Associates - Rayleigh, UK
Since the landmark series of AES Journal articles on Screen Current Induced Noise (SCIN), shielding, and the Pin 1 problem was published in June 1995, there has been much discussion of the results and their implications for audio systems. This tutorial will demonstrate SCIN and show a SPICE model that helps to explain what is happening. These effects are often confused with grounding and shielding schemes, so we will also explain which interactions are real and which ones are mythical.
Monday, May 24, 14:00 — 15:45 (Room C2)
T8 - Les Paul: We Use His Innovations Every Day
The many contributions of the late Les Paul to the art and technology of recording were often overlooked in media coverage of his passing last year at the age of 94. While his importance as a musician and as an innovator of the solid-body electric guitar certainly deserves wide praise and respect, his developments in recording continue to be used by producers, engineers, and musicians at virtually every recording session. Producer and educator Barry Marshall looks at the career of Les Paul and plays some of his ground-breaking recordings as a sideman, as a solo artist, and as part of the Les Paul and Mary Ford duo act. Special emphasis in the presentation will be placed on the way that Les' musicianship and musical instincts drove his technical breakthroughs.
Tuesday, May 25, 09:00 — 10:45 (Room C2)
T9 - Compression FX—Use Your Power for Good, Not Evil
Alex U. Case, University of Massachusetts Lowell - Lowell, MA, USA
Dynamic range compression, so often avoided by the purist and damned by the press, is most enthusiastically embraced by pop music creators. As an audio effect, it can be easily overused. Reined in, it can be difficult to perceive. It is always difficult to describe. As a tool, its controls can be counterintuitive, and its meters and flashing lights uninformative. In this tutorial—rich with audio examples—Case organizes the broad range of iconic effects created by audio compressors as they are used to reduce and control dynamic range, increase perceived loudness, improve intelligibility and articulation, reshape the amplitude envelope, add creative doses of distortion, and extract ambience cues, breaths, squeaks, and rattles. Learn when pop engineers reach for compression, know what sort of parameter settings are used (ratio, threshold, attack, and release), and advance your understanding of what to listen for and which way to tweak.
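The controls named above (ratio, threshold, attack, and release) can be illustrated with a minimal feed-forward compressor sketch. This is not material from the tutorial itself, just a hypothetical illustration: gain reduction is computed above the threshold according to the ratio, then smoothed with one-pole attack/release envelopes.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0,
             attack=0.01, release=0.1, sample_rate=48000):
    """Minimal feed-forward compressor sketch (illustrative only).

    Signal level above threshold_db is reduced by a factor of `ratio`;
    the gain reduction is smoothed by attack/release time constants.
    """
    # One-pole smoothing coefficients derived from the time constants
    a_att = math.exp(-1.0 / (attack * sample_rate))
    a_rel = math.exp(-1.0 / (release * sample_rate))
    env_db = 0.0  # smoothed gain reduction in dB (0 = no reduction)
    out = []
    for x in samples:
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        over = max(level_db - threshold_db, 0.0)
        # Desired gain reduction: above threshold, output rises at 1/ratio
        target = over * (1.0 - 1.0 / ratio)
        # Attack smoothing when reduction grows, release when it decays
        coeff = a_att if target > env_db else a_rel
        env_db = coeff * env_db + (1 - coeff) * target
        out.append(x * 10 ** (-env_db / 20))
    return out
```

Shortening the attack lets the compressor clamp transients faster; lengthening the release lets ambience and breaths "pump" up after loud passages, which is exactly the family of effects the tutorial explores.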
Tuesday, May 25, 11:15 — 12:15 (Room C2)
T10 - ADR
Dave Humphries, Loopsync
ADR (automated dialogue replacement) is becoming more necessary than ever. Noisy locations, special effects, and dialog changes are a major part of any drama production. Wind noise, generators, lighting chokes, aircraft, traffic, and rain can all be eliminated by good ADR.
In this tutorial Dave Humphries will discuss the reasons why we need to put actors through this, how we can help minimize the need, what a dialog editor needs to know, and how to help actors succeed at ADR. He will also demonstrate the art of recording ADR live.
Tuesday, May 25, 16:00 — 17:30 (Room C6)
T11 - Loudspeakers and Headphones
Wolfgang Klippel, Klippel GmbH - Dresden, Germany
Distributed mechanical parameters describe the vibration and geometry of the sound-radiating surface of loudspeaker drive units. This data is the basis for predicting the sound pressure output and for a decomposition of the total vibration into modal and sound-pressure-related components. This analysis separates acoustical from mechanical problems, shows the relationship to the geometry and material properties, and gives indications for practical improvement. The tutorial combines the theoretical background with practical loudspeaker diagnostics illustrated on various kinds of transducers such as woofers, tweeters, compression drivers, microspeakers, and headphones.