
AES New York 2005
Tutorial Session Details


Friday, October 7, 9:00 am — 12:00 pm

T1 - The Acoustics and Psychoacoustics of Loudspeakers in Small Rooms: A Review

Presenter:
Floyd Toole, Harman International Industries, Inc. - Northridge, CA, USA

Abstract:
The physical measures by which acousticians evaluate the performance of rooms have evolved in performance spaces—concert halls, opera houses, and auditoria. They rely on a set of assumptions that become progressively less valid as spaces get smaller and more acoustically absorptive. In live performances, sound sources radiate in all directions and the room is a part of the performance. In sound reproduction, loudspeakers tend to exhibit significant directivity, and what we hear should ideally be independent of the listening room. What, then, should we measure in small rooms? What configuration of loudspeakers and acoustical treatment is appropriate for multichannel audio reproduction? To what extent can we "eliminate" the room? Or, do we need to? Is there a point beyond which the human hearing system is able to adapt to the listening space—hearing "through" the room and "around" the reflections to accurately perceive the source? A certain amount of the right kind of reflected sound appears to enhance the music listening experience and, interestingly enough, to improve speech intelligibility. In this tutorial we review some of the basic science, using existing knowledge to provide guidance for choosing and using loudspeakers in rooms, and pointing out gaps in our knowledge— subjects for future research.


Friday, October 7, 9:00 am — 12:00 pm

T2 - Audio System Grounding and Interfacing—An Overview

Presenter:
Bill Whitlock, Jensen Transformers, Inc. - Chatsworth, CA, USA

Abstract:
Many audio professionals think of grounding and interfacing as a “black art.” This tutorial replaces myth and misinformation with insight and knowledge, revealing the true sources of system noise and ground loops. Signals accumulate noise and interference as they flow through system equipment and cables. Both balanced and unbalanced interfaces transport signals but are also vulnerable to coupling of interference from the power line and other sources. The realities of ac power distribution and safety are such that some widely used noise reduction strategies are both illegal and dangerous. Properly wired, fully code-compliant systems always exhibit small but significant residual voltages between pieces of equipment as well as tiny leakage currents that flow in signal cables. The unbalanced interface has an intrinsic problem, common-impedance coupling, making it very vulnerable to noise problems. The balanced interface, because of a property called common-mode rejection, can theoretically nullify noise problems. Balanced interfaces are widely misunderstood and their common-mode rejection suffers severe degradation in most real-world systems. Many pieces of equipment, because of an innocent design error, have a built-in noise coupling mechanism. A simple, no-test-equipment, system troubleshooting method will be described. It can pinpoint the exact location and cause of system noise. Most often, devices known as ground isolators are the best way to eliminate noise coupling. Signal quality and other practical issues are discussed as well as how to properly connect unbalanced and balanced interfaces to each other. While immunity to RF interference is a part of good equipment design, it must often be provided externally. Finally, power line treatments such as technical power, balanced power, power isolation transformers, and surge suppression are discussed.
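
To put a rough number on the common-impedance coupling described above, the sketch below (in C) computes the hum voltage produced when a small power-line leakage current flows through the shared shield of an unbalanced interconnect. The current, shield resistance, and signal level are assumed values chosen only for illustration, not figures from the tutorial.

/* Illustrative sketch of common-impedance coupling in an unbalanced
 * interface. All numeric values are assumptions for illustration only. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double leakage_current   = 1.0e-3;  /* assumed 1 mA power-line leakage current */
    double shield_resistance = 0.1;     /* assumed 100 milliohms of shared shield/ground path */
    double signal_level_dbv  = -10.0;   /* nominal consumer line level, about 0.316 V rms */

    /* The leakage current flowing through the shared shield resistance
     * produces a voltage that appears in series with the signal. */
    double hum_voltage    = leakage_current * shield_resistance;    /* volts */
    double signal_voltage = pow(10.0, signal_level_dbv / 20.0);     /* volts rms */

    /* Signal-to-hum ratio in dB: 20*log10(Vsignal / Vhum). */
    double s_to_hum_db = 20.0 * log10(signal_voltage / hum_voltage);

    printf("Hum voltage: %.1f uV\n", hum_voltage * 1e6);
    printf("Signal-to-hum ratio: %.1f dB\n", s_to_hum_db);
    return 0;
}

With these assumed values the hum sits only about 70 dB below a -10 dBV signal, which illustrates why the abstract calls common-impedance coupling the intrinsic problem of unbalanced interfaces.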


Friday, October 7, 1:30 pm — 3:30 pm

T3 - Analog Design in a Digital Environment

Presenters:
Dennis Bohn, Rick Jeffs, & Paul Mathews, Rane Corporation - Mukilteo, WA, USA

Abstract:
This tutorial presents a fast-paced overview of the problems faced by an analog audio designer working in the mixed analog-digital environment found in most pro audio products. A typical mixed analog-digital audio product is examined with respect to the analog design elements necessary to maintain pristine audio performance while satisfying international EMC and safety compliance. Topics include how to bring in low-level signals, maintain fidelity and SNR, provide high gain and buffering, supply clean power, and deliver high quality signals on the output side, all within the context of a hostile environment both inside and outside the product housing. Examples of gotchas and do’s and don’ts in chassis design, circuit design, and circuit board layout highlight the session.


Friday, October 7, 1:30 pm — 3:30 pm

T4 - Music Surround Mixing: A Matter of Perspective

Presenters:
Jeff Levison
Ronald Prent

Abstract:
With the increasing popularity of surround music, mixing techniques have been evolving to include a variety of audio perspective choices for the mixing engineer. The tutorial will examine several approaches to mixing multichannel popular music and evaluate artistic versus technical goals. Catalog remixes and new recordings mixed in various ways, plus examples in mono, stereo, quad, and 5.1, will be used to illustrate the listener's perspective.


Friday, October 7, 3:30 pm — 6:00 pm

T5 - Dynamic Range Compression—A Real World Users Guide

Presenter:
Alex U. Case, University of Massachusetts Lowell - Lowell, MA, USA

Abstract:
Compression (of audio, not data) confounds many recording engineers, from rookies to veterans. As an audio effect it can be difficult to hear and even more difficult to describe. As a tool its controls can be counterintuitive and its meters and flashing lights uninformative. This tutorial organizes the broad range of effects created by audio compressors as engineers use them to reduce or control dynamic range, increase perceived loudness, improve intelligibility and articulation, reshape the amplitude envelope, add creative doses of distortion, and extract ambience cues, breaths, squeaks, and rattles. Attendees will learn when to reach for compression, know a good starting place for compression parameters (ratio, threshold, attack, and release), and advance their understanding of what to listen for and which way to tweak.
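
For readers unfamiliar with the parameters named above, the following is a minimal sketch, in C, of a generic compressor gain computer with one-pole attack/release smoothing. It is a textbook-style illustration, not any product's algorithm, and the settings in the example are arbitrary assumptions.

/* Minimal static compressor sketch: gain computer plus one-pole
 * attack/release smoothing. A generic illustration only. */
#include <stdio.h>

typedef struct {
    double threshold_db;  /* level above which gain reduction begins */
    double ratio;         /* e.g., 4.0 means 4:1 compression */
    double attack_coef;   /* smoothing coefficient toward more gain reduction */
    double release_coef;  /* smoothing coefficient toward less gain reduction */
    double gr_db;         /* current (smoothed) gain reduction in dB, <= 0 */
} compressor;

/* Per-block processing: 'level_db' is a detected input level in dBFS. */
static double compressor_gain_db(compressor *c, double level_db)
{
    /* Static curve: above threshold, output rises 1/ratio dB per input dB. */
    double over = level_db - c->threshold_db;
    double target_gr = (over > 0.0) ? over * (1.0 / c->ratio - 1.0) : 0.0;

    /* Attack when gain reduction must increase, release when it may decrease. */
    double coef = (target_gr < c->gr_db) ? c->attack_coef : c->release_coef;
    c->gr_db += coef * (target_gr - c->gr_db);
    return c->gr_db;   /* apply as linear gain: 10^(gr_db/20) */
}

int main(void)
{
    compressor c = { -20.0, 4.0, 0.2, 0.01, 0.0 };  /* assumed settings */
    double levels[] = { -30.0, -10.0, -10.0, -10.0, -30.0, -30.0 };
    for (int i = 0; i < 6; i++)
        printf("in %6.1f dB -> gain %6.2f dB\n",
               levels[i], compressor_gain_db(&c, levels[i]));
    return 0;
}

Raising the ratio steepens the static curve; shortening the attack makes the smoothed gain reduction track transients more quickly.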


Friday, October 7, 4:30 pm — 6:30 pm

T6 - Designing with Delta-Sigma Converters

Presenter:
Steve Green, Cirrus Logic, Inc. / Crystal Semiconductor - Austin, TX, USA

Abstract:
The performance of integrated analog-to-digital and digital-to-analog converter ICs continues to improve as new techniques and processes become available to the IC design engineer. Many subtleties must be understood and addressed in order to realize the optimal performance of these devices.
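
As background, a first-order delta-sigma modulator, the conceptual core of these converters, can be sketched in a few lines: the quantized output is fed back, subtracted from the input, and the difference is integrated before requantization at a high oversampling rate. The sketch below is the textbook first-order structure, not any particular commercial design.

/* Textbook first-order delta-sigma modulator sketch. Produces a 1-bit
 * stream whose average tracks the oversampled input. */
#include <stdio.h>

int main(void)
{
    double integrator = 0.0;
    double input = 0.25;          /* constant test input in the range -1..+1 */
    long ones = 0, n = 100000;    /* number of oversampled output bits */

    for (long i = 0; i < n; i++) {
        int bit = (integrator >= 0.0) ? 1 : 0;   /* 1-bit quantizer */
        double feedback = bit ? 1.0 : -1.0;      /* 1-bit DAC feedback */
        integrator += input - feedback;          /* delta, then sigma */
        ones += bit;
    }

    /* The density of ones encodes the input: average output ~ input. */
    double average = 2.0 * (double)ones / (double)n - 1.0;
    printf("input = %.3f, average of bitstream = %.3f\n", input, average);
    return 0;
}

Running it shows the average of the 1-bit stream converging to the input value; real converters add higher-order loops, multibit quantizers, and decimation filtering.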


Friday, October 7, 6:00 pm — 8:00 pm

T7 - DJ Mixing: Are You Ready For The Transition?

Presenters:
Richie Hawtin “PlastikMan”
Ronald Prent

Abstract:
A journey into the fascinating world of sample- and loop-based music, combining original sounds to create new Techno-Lounge music in surround, using the advantages of a digital workstation together with analog technology to take the next step in Techno music!

This tutorial will show an exciting new world where Techno music meets Surround “Artistry” and explores its endless creative possibilities for making new music. We will show how the project evolved and demonstrate the workflow. At the end of the tutorial we will play a 20-minute part of the project so you can immerse yourself in the music and make the “Transition.”


Saturday, October 8, 9:00 am — 11:00 am

T8 - Chamber Reverb—D.I.Y.: Send Your Snare to the Stairwell, Your Kalimba to the Kitchen, and Your Bassoon to the Bathroom

Presenter:
Alex U. Case, University of Massachusetts Lowell - Lowell, MA, USA

Abstract:
Any live space becomes a reverb chamber if you are willing to make the effort. Careful placement of loudspeakers creates the reverb send, and microphones provide the reverb return. This tutorial reviews the basics of a good reverb chamber—the architecture, the signal flow, the equipment, the measurements, and most of all the sound. Informed by this review of some of the most important chambers in the history of pop production, learn how to bring it all home. Your productions deserve the uniqueness and richness that only a live chamber can offer. Move your mixes out of the box by applying effects not available for purchase in any box. You alone have access to the spaces around your studio. Turn any live space—be it a stairwell, a kitchen, or a bathroom—into a source of reverberation that becomes part of your own signature sound.


Saturday, October 8, 11:00 am — 1:00 pm

T9 - Loudspeaker Basics and Planar-Magnetics

Presenter:
David Clark, DLC Design - Wixom, MI, USA

Abstract:
The first half of the tutorial covers the transducer, enclosure and listening environment basics:
• Two coupled transducers (motor and radiator)
• Motor types—magnetic, electrostatic, piezoelectric
• Radiator types—cone, membrane, panel, horn
• Simplified physics and energy flow
• Enclosures
• Listening environment

The second half of the tutorial will investigate planar-magnetic transducers in some depth:
• History of planar magnetic speakers
• Ribbon vs. planar-magnetic
• How they work—physical arrangement
• Why they have a fanatic following
• What are the problems in making them work
• Room interface
• Sub-woofers and super-tweeters


Saturday, October 8, 1:00 pm — 3:00 pm

T10 - Preservation, Archiving, and Restoration: A Look at Practical Application

Presenters:
David Ackerman, Harvard College Library Audio Preservation Services - Cambridge, MA, USA
Peter Alyea, Library of Congress - Washington D.C., USA
Chris Lacinak, VidiPax, LLC - Long Island City, NY, USA

Abstract:
This tutorial session will approach the practical application of three fundamentals associated with archiving, preservation, and restoration: reproduction, digitization, and metadata.

Reproduction

Faithful reproduction of source content is the overarching goal of reformatting. Faithful sonic reproduction is achieved by restoring the physical medium to its original condition. Although it may be expedient, shortcutting this labor-intensive phase is ultimately detrimental to the content. Any compromises made during these steps can affect the integrity of the transferred content to the detriment of future preservation and, of course, the value of the asset. We will look at diagnosis and treatment methods associated with media that is commonly found in sound archives.

Digitization

As archives rapidly reformat content from physical carriers to digital systems, the bridge used to make that transition and the systems that manage the content carry a great burden. Ensuring and maintaining integrity are simple in concept but difficult in practice. We will explore the practical application of digitization and the digital archive from a system-wide perspective.

Metadata

Without metadata there is no preservation in the digital archive. There is the matter of the content, as well as the relationships of the audio to other audio files in a project, a collection, and the archive itself. There are the technical characteristics of the file that must be known to retrieve the audio properly and the documentation of the work history behind the creation of the audio file. This presentation will explore the Harvard College Library’s use of the Harvard Digital Repository Service (DRS) for the preservation of unique and rare audio recordings.


Saturday, October 8, 4:00 pm — 6:00 pm

T11 - Loudness

Presenters:
Emil Torick, CBS, Retired - USA
Marvin Caesar, Aphex Systems - USA
Rachel Cruz, House Ear Institute - USA
Mike Dorrough, Dorrough Electronics - USA
Frank Foti, Omnia/Telos Systems - USA
James Johnston, Microsoft Corp. - USA
Bob Katz, Digital Domain - USA
Thomas Lund, TC Electronic - Denmark

Abstract:
Loudness, appropriate or otherwise, has always been a key attribute of any audio program, yet it remains difficult to quantify, attain, and control. Being a perceptual quality, a loudness measure must be based on comprehensive human reference tests. Like our ears, a useful loudness measure should work in real time on music and speech as well as on a wide variety of other signals. It should also be readable and actionable by a person without an audio background. Such a measure will serve music and audio quality in general, because it will not be fooled, as today's level detectors are, into accepting heavy distortion on CDs and commercials. Today's indicators were designed for another era, when the loudness "optimizing" tools we now have at our disposal were not available. There is a chance now of balancing weapon and countermeasure before digital broadcast becomes synonymous with listening fatigue, as many pop CDs are today. A panel of specialists drawn from the fields of broadcast, analog electronics design, psychoacoustics, digital signal processing, hearing health, and audio mastering will each examine the topic from their own perspectives. The panel will endeavor to answer: What is loudness, and how is it measured? How does it differ from sound pressure level? What are its effects on perception, the broadcast chain, and perceptual coding? How does loudness achieved through drastic dynamic range restriction affect the health of the recording and broadcast industries? The session will consist of discussion, demonstrations, and Q&A.
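
As a simple illustration of why the level indicators mentioned above say little about loudness, the sketch below (in C) compares peak and RMS readings of three test signals that share the same peak level. RMS is used only as a crude stand-in for a loudness measure, and the signals and sample rate are assumptions for illustration, not material from the panel.

/* Same peak level, very different average level: a crude demonstration
 * of why peak-reading meters say little about loudness. RMS is only a
 * rough stand-in for a true loudness measure. */
#include <stdio.h>
#include <math.h>

#define N 48000   /* one second at an assumed 48 kHz sample rate */
static const double PI = 3.14159265358979323846;

static void measure(const double *x, const char *name)
{
    double peak = 0.0, sumsq = 0.0;
    for (int i = 0; i < N; i++) {
        if (fabs(x[i]) > peak) peak = fabs(x[i]);
        sumsq += x[i] * x[i];
    }
    printf("%-8s peak %6.1f dBFS   rms %6.1f dBFS\n",
           name, 20.0 * log10(peak), 10.0 * log10(sumsq / N));
}

int main(void)
{
    static double sine[N], clipped[N], burst[N];
    for (int i = 0; i < N; i++) {
        double s = sin(2.0 * PI * 1000.0 * i / 48000.0);
        sine[i]    = 0.5 * s;                          /* steady tone, peak -6 dBFS */
        clipped[i] = fmax(-0.5, fmin(0.5, 2.0 * s));   /* same peak, squared-off */
        burst[i]   = (i % 4800 < 240) ? 0.5 * s : 0.0; /* same peak, 5% duty cycle */
    }
    measure(sine, "sine");
    measure(clipped, "clipped");
    measure(burst, "burst");
    return 0;
}

The three signals read identically on a peak meter yet differ by roughly 16 dB in RMS, which is one reason heavily clipped program can pass a peak-based check while sounding far louder.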


Saturday, October 8, 4:30 pm — 6:30 pm

T12 - Audio Compression

Presenter:
John Strawn, S Systems Inc. - Larkspur, CA, USA

Abstract:
Audio compression involves removing certain parts of the signal, with the goal of reducing the data rate without noticeably impacting the audio quality. In this tutorial, which will include sound examples, we start with an overview of the perceptual bag of tricks used in perceptually-based codecs such as MP3 and AC-3. Perceptual insights are often combined with mathematical innovations, such as the discrete cosine transform, and for R&D engineers there will be information about methods for implementing the DCT. We will show how these building blocks can be assembled into the basic structure of a perceptual encoder and decoder. As time allows, we will review the basic families of codecs, including where MP3 fits in, and look at non-perceptually-based codecs. Finally, building on the theory covered here, we offer tips for the recording engineer on making an MP3 that minimizes undesired artifacts.
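
Since the abstract singles out the discrete cosine transform, here is a direct, non-optimized DCT-II of one short block as a reference for what faster implementations compute. It is the textbook definition, in C, not the windowed, overlapped transform used by any specific codec.

/* Direct (O(N^2)) DCT-II of one block, as a reference definition.
 * Real codecs use windowed, overlapped variants (e.g., the MDCT) and
 * fast algorithms; this is only the textbook transform. */
#include <stdio.h>
#include <math.h>

#define N 8
static const double PI = 3.14159265358979323846;

static void dct_ii(const double *x, double *X)
{
    for (int k = 0; k < N; k++) {
        double sum = 0.0;
        for (int n = 0; n < N; n++)
            sum += x[n] * cos(PI / N * (n + 0.5) * k);
        X[k] = sum;
    }
}

int main(void)
{
    /* A low-frequency test block: a slow cosine sampled 8 times. */
    double x[N], X[N];
    for (int n = 0; n < N; n++)
        x[n] = cos(PI / N * (n + 0.5));

    dct_ii(x, X);
    for (int k = 0; k < N; k++)
        printf("X[%d] = %8.4f\n", k, X[k]);   /* energy concentrates in X[1] */
    return 0;
}

For this low-frequency test block the energy lands in a single coefficient, which is the energy-compaction property codecs exploit before quantization.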


Sunday, October 9, 9:00 am — 11:00 am

T13 - Project Studio Design: Layout, Acoustics, Sound System, Tuning – Part 1

Presenter:
Anthony Grimani, Performance Media Industries (PMI), Ltd. - Fairfax, CA, USA

Abstract:
Statistics tell us that fully 93% of all A titles go through a “project” studio in some phase of their production cycle. In fact, there are an estimated 350,000 project studios worldwide. Whether to cut costs, for convenience, or for conviviality, the project studio trend is growing, and gear is available at astoundingly low prices to all those interested in setting up their own project room. The real challenge today is how to maintain quality and consistency in the transition from “professional” studio to project studio and back. Ultimately, project studios need to be set up following some fundamental rules if the program material is to stand a chance of surviving the transition between facilities. These two tutorials will focus on guidelines for proper set-up of a project studio: room layout, acoustics, sound system design, and calibration. The information will cover all the details that affect the system design, from room to equipment, and will provide simple recipes for improving quality and consistency. Basic knowledge of audio engineering theory, acoustics, and electro-acoustics is recommended.

This is the first tutorial of a two-part series. The main topics covered in this first part are:

• Room and system layout
• Room acoustics and dimensioning
• Sound isolation
• Noise control
• Vibration and rattle control


Sunday, October 9, 9:00 am — 12:00 pm

T14 - Audio Ear Training

Presenter:
David Moulton, Moulton Laboratories - Groton, MA, USA

Abstract:
Audio ear training can be a powerful and effective component of education for audio engineers. The act of learning to accurately recognize and describe various physical audio characteristics and dimensions can enable a much clearer and higher level of understanding of audio processes, as well as their relative importance. Further, such ear training can be undertaken at any level, for a variety of purposes, by private individuals, by educational institutions, and by companies concerned about the acuity and reliability of their employees’ hearing.
In this tutorial, Dave Moulton will describe his ear training techniques, developed over the past 35 years, including presentations on the identification of various frequency bands, relative amplitudes, and time-domain behavior, as well as a variety of signal processing treatments.

Moulton will discuss the cognitive problems he has found that students often have, the real goals for such ear training, how students can do ear training on their own, and how such programs can be most effectively used in an academic context. He will discuss the relationship between these techniques and the musical ear training widely used in music curricula. He will also discuss what he feels are the reasonable resolution limits for the determined listener, in both blind and sighted contexts. Participants will take various drills to get a sense of what is involved as well as the various levels of difficulty that are normally encountered.


Sunday, October 9, 11:00 am — 1:00 pm

T15 - Distance and Depth Perception

Presenters:
Durand Begault, NASA Ames Research Center - Moffett Field, CA, USA
William Martens, McGill University - Montreal, Quebec, Canada

Abstract:
Much discussion in the audio engineering literature has focused on the spatial location of sounds in terms of azimuth, and less often, elevation. Far less investigation has focused on auditory distance perception and how engineers consciously or unconsciously manipulate the relative distance of sound sources.
How is it possible for an engineer to systematically manipulate the distance of the perceived image of a sound source? How are the sound "stage" and relative distances established on both live and studio recordings? What are the fundamental cues for distance and depth perception? This tutorial seeks to elucidate these issues. Sound examples will be provided.


Sunday, October 9, 12:00 pm — 2:00 pm

T16 - Fundamental Knowledge about Microphones

Presenter:
Jörg Wuttke, SCHOEPS GmbH - Karlsruhe, Germany

Abstract:
Even a professional with many years of experience might enjoy reviewing the basics of acoustics and the operating principles of microphones. This tutorial also includes a discussion of technical specifications and numerous practical issues.

- Introduction: Vintage technology and the future; physics and emotion; choosing a microphone for a specific application

- Basic acoustics: Sound waves; frequency and wavelength; pressure and velocity; reflection and diffraction; comb filter effects; direct and diffuse sound

- Basic evaluations: Loudness and SPL; decibels; listening tests; frequency/amplitude and frequency/phase response; frequency domain and time domain

- How microphones work: Pressure and pressure-gradient transducers; directional patterns; some special types (boundary layer microphones and shotguns)

- Microphone specifications: Frequency responses (plural!); polar diagrams; free-field vs. diffuse-field response; low- and high-frequency limits; equivalent noise, maximum SPL and dynamic range

- Practical issues: Source and load impedance; powering; wind and breath noise


Sunday, October 9, 12:00 pm — 2:00 pm

T17 - Post Production of Sacred Love Live

Presenter:
Nathaniel Kunkel

Abstract:
The latest Sting DVD, Sacred Love Live, was recorded this year in Frankfurt, Germany. In this tutorial Emmy Award-winning mixer Nathaniel Kunkel will discuss the audio postproduction of this DVD. Kunkel was responsible for all aspects of the audio postproduction for this project, including the mixing of the main program and all bonus features. This discussion will provide insights into the surround mixing, mastering, and encoding of the audio for DVD authoring. There will be special focus on the methodology used for obtaining artist approvals and on stereo compatibility issues.


Sunday, October 9, 2:00 pm — 4:00 pm

T18 - Psychophysics and Physiology of Hearing

Presenter:
Poppy Crum, Johns Hopkins University School of Medicine - Baltimore, MD, USA

Abstract:
This tutorial presents psychoacoustical phenomena from a physiological perspective. What we hear for any given acoustic signal is often not easily predicted without a consideration of nonlinear processing occurring in the ear and brain. Psychoacoustical studies demonstrate this relationship and offer mapping functions that enable better prediction from the acoustic source to the perceptual experience. In this tutorial we will discuss many such phenomena as they occur in natural hearing and offer an understanding of the physiology that leads to a particular perceptual outcome. Initial emphasis will be on how the ear (outer, middle, and inner) processes a simple sound – with a focus on the physiology of the inner ear. From here we will consider psychoacoustic phenomena associated with the perceptual experiences of: loudness, masking, pitch, and spatial localization. As appropriate we will discuss the physiology of higher auditory brain areas (beyond the cochlea) and the relative processing necessary for a given phenomenon. For example, many of the neural correlates of spatial hearing are well understood. We will discuss various properties of spatial hearing and attempt to understand how an acoustic signal in a free-field environment is encoded and represented in the nervous system ultimately leading to the perceived location. In other words, where, and how, is a spatial signal interpreted and coded in the brain? And how does this representation influence our perception of the source’s location?


Sunday, October 9, 4:30 pm — 6:30 pm

T19 - Project Studio Design: Layout, Acoustics, Sound System, Tuning—Part 2

Presenter:
Anthony Grimani, Performance Media Industries (PMI), Ltd. - Fairfax, CA, USA

Abstract:
Statistics tell us that fully 93 percent of all A titles go through a “project” studio in some phase of their production cycle. In fact, there are an estimated 350,000 project studios worldwide. Whether to cut costs, for convenience, or for conviviality, the project studio trend is growing, and gear is available at astoundingly low prices to all those interested in setting up their own project room. The real challenge today is how to maintain quality and consistency in the transition from “professional” studio to project studio and back. Ultimately, project studios need to be set up following some fundamental rules if the program material is to stand a chance of surviving the transition between facilities. These two tutorials will focus on guidelines for proper set-up of a project studio: room layout, acoustics, sound system design, and calibration. The information will cover all the details that affect the system design, from room to equipment, and will provide simple recipes for improving quality and consistency. Basic knowledge of audio engineering theory, acoustics, and electroacoustics is recommended.

This is the second tutorial of a two-part series. The main topics covered in the second part are:

• Acoustical treatments for optimized reflections, echoes, and energy decay
• Sound system selection
• Sound system placement optimization
• System calibration


Sunday, October 9, 4:30 pm — 6:00 pm

T20 - Creating Audio for Next Generation Game Consoles—What You Need to Know

Presenter:
Brian Schmidt, Microsoft Corp. - Redmond, WA, USA

Abstract:
The next generation game consoles offer a completely new set of hardware. Previous limitations in assigned memory, processing power, and audio tools restricted mixers and sound designers in the creation and delivery of audio assets. This new age will allow more freedom to create completely immersive gameplay. This tutorial will discuss the advanced capabilities and features available in the next generation consoles and instruct game mixers and designers in their use.


Monday, October 10, 9:00 am — 12:00 pm

T21 - Digital Filters and Filter Design—A Tutorial

Presenters:
James Johnston, Microsoft Corporation - Redmond, WA, USA
Bob Adams
Jayant Datta

Abstract:
In this tutorial we will first present the basic form and function of digital filters, explain the link between impulse response and frequency response and how it leads to different forms of filtering, and relate that to the similarities and differences between the design of analog filters and digital filters. Then we will discuss some presently available filter design tools, how to use them, and how to relate the filters they produce to the actual needs of the filter designer. Finally, we will say something about bit depth for both data and coefficients and touch on the sensitivities of different kinds of digital filters.
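
To make the impulse-response/frequency-response link concrete, the sketch below evaluates the magnitude response of an FIR filter directly from its coefficients (its impulse response). The 5-tap moving average is just a stand-in example, not a filter from the tutorial, and C is used here only for illustration.

/* The frequency response of an FIR filter follows directly from its
 * impulse response: H(e^jw) = sum_n h[n] e^(-jwn). The 5-tap moving
 * average below is only a stand-in example. */
#include <stdio.h>
#include <math.h>

#define TAPS 5
static const double PI = 3.14159265358979323846;

/* Magnitude response at normalized frequency w (radians/sample). */
static double fir_magnitude(const double *h, int taps, double w)
{
    double re = 0.0, im = 0.0;
    for (int n = 0; n < taps; n++) {
        re += h[n] * cos(w * n);
        im -= h[n] * sin(w * n);
    }
    return sqrt(re * re + im * im);
}

int main(void)
{
    /* Impulse response of a 5-tap moving average (a crude low-pass). */
    double h[TAPS] = { 0.2, 0.2, 0.2, 0.2, 0.2 };

    for (int i = 0; i <= 10; i++) {
        double w = PI * i / 10.0;            /* 0 .. Nyquist */
        double mag = fir_magnitude(h, TAPS, w);
        printf("w = %4.2f*pi   |H| = %6.4f (%7.2f dB)\n",
               (double)i / 10.0, mag, 20.0 * log10(mag + 1e-12));
    }
    return 0;
}

The printout shows the expected low-pass behavior, including the response null near 0.4 times the Nyquist frequency, where the 5-tap average cancels exactly.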


Monday, October 10, 9:00 am — 10:30 am

T22 - Mixing Surround for Broadcast and Theatrical Release

Presenter:
Dominick Tavella, Sound One - NYC, NY, USA

Abstract:
Surround mixing was once for theatrical release only. Today surround sound is an intrinsic part of the broadcast world. This tutorial will examine the different approaches necessary for effective surround production for the large and small screen. Dominick Tavella will present a variety of surround mixing examples illustrating the range of approaches used, from his Oscar®-winning mix for "Chicago" to documentaries for director Ken Burns.


Monday, October 10, 11:00 am — 1:30 pm

T23 - International Surround Mixes: A Listening Experience

Presenter:
Mick Sawaguchi

Abstract:
Mick Sawaguchi will host a session of rare surround recordings from around the world on a high-quality playback system. Mick’s selections will include pop, contemporary, jazz, electronica, classical, club, world music, and soundscapes from his international collection. He will discuss differing recording philosophies and techniques and his approach to making sense of the surround world. Enjoy an international surround listening experience.


Monday, October 10, 12:30 pm — 2:00 pm

T24 - Assembly Language Programming: Street Smarts from OOP

Presenter:
John Strawn, S Systems Inc. - Larkspur, CA, USA

Abstract:
This tutorial will cover the craft of writing in assembler, typically for DSP chips and embedded processors. Based on my experience of the last 20 years, I will demonstrate how I apply lessons from object-oriented programming (OOP) to assembly language, to make code easier to develop, debug, maintain, and reuse. Recently this approach saved me when finishing a 30,000-line program in assembler. Even if the syntax of the assembly language does not support OOP directly, I approach the design using OOP principles, and I structure the code following OOP practices, without changing the target assembler syntax. The discussion will review OOP, discuss how to partition a real-world device (maybe an iPOD) based on OOP, and show (non-proprietary) code examples based on several processors. This tutorial seminar is especially intended for students just learning assembly language programming as well as R&D engineers who are not yet seasoned assembly language programmers. Bring pencil and paper to participate in some designs during the presentation.
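
As one way to picture the OOP-style partitioning described above, the sketch below shows, in C rather than assembler, per-object state gathered in a struct with behavior exposed through a small table of entry points; in assembly this maps onto a labeled block of state plus a jump table. The device and all names here are hypothetical and are not taken from the presenter's code.

/* Illustration (in C, not assembler) of OOP-style partitioning:
 * per-object state in a struct, behavior as a fixed table of entry
 * points. Names are hypothetical. */
#include <stdio.h>

typedef struct volume_ctrl {
    int level;                                   /* object state */
    void (*set_level)(struct volume_ctrl *, int);
    void (*render)(const struct volume_ctrl *);
} volume_ctrl;

static void volume_set_level(volume_ctrl *v, int level) { v->level = level; }

static void volume_render(const volume_ctrl *v)
{
    printf("volume = %d\n", v->level);
}

/* "Constructor": binds state and entry points together. */
static void volume_init(volume_ctrl *v)
{
    v->level = 0;
    v->set_level = volume_set_level;
    v->render = volume_render;
}

int main(void)
{
    volume_ctrl master;
    volume_init(&master);
    master.set_level(&master, 11);   /* call through the "method table" */
    master.render(&master);
    return 0;
}

The point is the structure rather than the language: keeping each module's state and entry points together is what makes code easier to develop, debug, maintain, and reuse, whatever the target syntax.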


Monday, October 10, 12:30 pm — 2:30 pm

T25 - DSP Design Considerations in Application to Higher Performance Intelligent Digital Audio Amplifiers

Presenter:
Skip Taylor, D2Audio Corporation - Austin, TX, USA

Abstract:
There is a strong technological revolution underway in the world of audio amplification, in which traditional Class A/Class AB analog amplifiers are giving way to high-efficiency digital amplifiers with improved sound quality. This presentation discusses some of the legacy amplifier technology solutions and their applications, and the key points audio design engineers need to pay attention to when transitioning to a digital amplifier system solution. Using advanced DSP technology, this new Class-D amplifier technology can lead to further performance improvements in higher-power digital amplifier solutions and can be a formidable competitor to legacy Class AB analog solutions.



(C) 2005, Audio Engineering Society, Inc.