AES Conventions and Conferences


AES San Francisco 2006
Master Class Session Details

Thursday, October 5, 9:00 am — 12:00 pm


Floyd Toole, Harman International Industries, Inc. - Northridge, CA, USA

Traditional acoustical design methods evolved in large performance spaces such as concert halls. They rely on assumptions that become progressively less valid as rooms get smaller and more acoustically absorptive. In sound reproduction, one cannot consider the loudspeakers and the room independently; they function as a system, differently below and above a transition region around 300 Hz. Above this transition we need to understand our reactions to reflected sounds; below it the modal behavior of the space is the dominant factor. What could be a very difficult situation is greatly alleviated by the ability of humans to adapt to the complexities of reflective rooms, including the abilities to correctly localize sounds in direction and distance and to hear much of the true timbral nature of sound sources. More research is needed before we completely understand the perceptual consequences of acoustical cues in multichannel reproduction as distinct from those contributed by the room. Evidence thus far suggests that, above the transition frequency, the room is a relatively benign and, in some ways, a beneficial factor. There are, however, implications about the acoustical performance of materials in the propagation paths. At low frequencies, room resonances are a major concern, but new techniques allow us to achieve similarly good bass at several listening locations.
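As an illustrative aside (not part of the session itself), the modal behavior mentioned above follows a standard formula for an idealized rectangular room: f(nx, ny, nz) = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). The sketch below lists the lowest resonances of a hypothetical 6 m × 4 m × 2.5 m listening room; the dimensions are example values only.

```python
# Illustrative sketch: axial/tangential/oblique mode frequencies of an
# idealized rectangular room. Room dimensions are hypothetical examples.
import math

C = 343.0  # speed of sound in air, m/s (approx., at room temperature)

def mode_frequency(nx, ny, nz, lx, ly, lz, c=C):
    """Resonant frequency (Hz) of mode (nx, ny, nz) in an lx x ly x lz room."""
    return (c / 2.0) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)

# Example: a 6 m x 4 m x 2.5 m listening room
room = (6.0, 4.0, 2.5)
modes = sorted(
    (mode_frequency(nx, ny, nz, *room), (nx, ny, nz))
    for nx in range(4) for ny in range(4) for nz in range(4)
    if (nx, ny, nz) != (0, 0, 0)
)
for f, idx in modes[:6]:
    print(f"{idx}: {f:.1f} Hz")  # lowest mode is the (1,0,0) axial, ~28.6 Hz
```

Note how the modes crowd closer together as frequency rises; above a few hundred hertz they overlap densely, which is one way to see why the ~300 Hz transition region separates modal from statistical behavior.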

Saturday, October 7, 8:30 am — 11:30 am


Henry W. Ott, Henry Ott Consultants

Controlling noise on a mixed-signal PCB can be a difficult problem. This is especially true on boards with multiple ADCs. Some designers suggest splitting the ground plane in order to isolate the digital ground from the analog ground. Although the split-plane approach can be made to work, it has many potential problems.

This presentation clearly describes the principles involved in the layout of a mixed-signal PCB and demonstrates that component placement, partitioning, and proper board topology, combined with routing discipline, are the keys to success in laying out a quiet mixed-signal PCB.

By understanding how and where high-frequency ground currents flow, we are able to develop an approach to controlling noise while, in most cases, still maintaining a single contiguous ground plane.

Saturday, October 7, 2:00 pm — 4:00 pm


Bill Whitlock, Jensen Transformers, Inc. - Chatsworth, CA, USA

One goal in the design of audio equipment is to maintain a high signal-to-noise ratio. But audio equipment most often operates on utility AC power, which, even under ideal conditions, normally creates ground voltage differences, magnetic fields, and electric fields. RF energy is increasingly omnipresent, too. Balanced interfaces are capable of conveying wide dynamic range analog audio signals while giving them unrivaled immunity to interference. Realizing this full capability in real-world, mass-produced equipment is not necessarily costly but requires some understanding of several common mistakes made by equipment designers. The telephone company pioneered the widespread use of balanced lines and for 50 years virtually all audio equipment used transformers at its balanced inputs and outputs—their high noise rejection was taken for granted.

When solid-state differential amplifiers began replacing transformers, most designers failed to recognize the importance of common-mode impedances—which are solely responsible for noise rejection. Instead, most believed that “balance” meant equal and opposite signal swings—which is a myth. As a result, most modern audio equipment has poor noise rejection when operating in real-world systems, even though it may have impressive rejection in a laboratory test. The IEC recognized this dichotomy when they revised their CMRR test standards in 2000 (at the urging of this author). A new IC uses bootstrap techniques to raise its common-mode impedances, and real-world noise rejection, to levels comparable to the finest transformers.

The three basic types of balanced output circuits, each with a peculiar set of tradeoffs, must be accommodated by balanced input circuits. Further, certain cable constructions and shield connections can degrade noise rejection of an otherwise perfect interface. A very common equipment design error, the "pin 1 problem," causes shield connections to behave as low-impedance audio inputs, allowing power-line noise and RF interference to enter the signal path.
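As an illustrative aside (not from the presentation), the claim that common-mode impedance, not "equal and opposite swings," governs noise rejection can be sketched numerically: each leg of a balanced input forms a voltage divider with its source impedance, and any impedance imbalance between the legs converts common-mode voltage into a differential error. All component values below are hypothetical.

```python
# Illustrative sketch: how common-mode input impedance sets real-world CMRR.
# Model: each leg divides the common-mode voltage as z_cm / (z_cm + z_source);
# an imbalance dz between the legs leaves a differential residue.
# All impedance values are hypothetical examples.
import math

def cmrr_db(z_cm, z_source, dz):
    """Approximate CMRR (dB) of a balanced input with common-mode input
    impedance z_cm, nominal per-leg source impedance z_source, and an
    impedance imbalance dz between the two legs (all in ohms)."""
    v1 = z_cm / (z_cm + z_source)
    v2 = z_cm / (z_cm + z_source + dz)
    return 20.0 * math.log10(1.0 / abs(v1 - v2))

# Ordinary differential-amplifier input: ~10 kohm common-mode impedance
print(cmrr_db(10e3, 100.0, 10.0))   # roughly 60 dB
# Bootstrapped input or good transformer: effectively ~1 Gohm
print(cmrr_db(1e9, 100.0, 10.0))    # roughly 160 dB
```

With perfectly matched sources (dz = 0) even a mediocre input measures superbly on the bench, which is why laboratory CMRR figures can flatter equipment that rejects poorly in real systems.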

Saturday, October 7, 4:30 pm — 6:30 pm


Julius O. Smith III, Stanford University - Stanford, CA, USA

This presentation will review methods for efficient real-time digital sound synthesis based on the physics of musical instruments. Design goals include faithful preservation of both expressive parametric control and synthetic sound quality. Proceeding roughly in historical order, we begin with the "Bicycle Built for Two" singing-voice sound example by John Kelly, Carol Lochbaum, and Max Mathews (1961). This demo inspired Arthur C. Clarke to have the HAL 9000 computer in the movie 2001: A Space Odyssey sing this, its "first song," as it was being disassembled by astronaut Dave Bowman. Following will be a series of sound examples and overviews of later synthesis models similarly based on acoustic signal processing principles. Examples include virtual stringed instruments, wind instruments, singing voice, and "virtual analog" synthesis. Along the way, a sampling of related recent research will be summarized.
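As an illustrative aside (the session's own examples are not reproduced here), the Karplus-Strong plucked string is one of the simplest physical-modeling methods of the kind the abstract describes: a delay line holds one period of the "string," and a two-tap averaging filter models frequency-dependent losses at each reflection. The sketch below is a minimal implementation, not the presenter's code.

```python
# Illustrative sketch: the classic Karplus-Strong plucked-string algorithm.
# A noise burst ("pluck") circulates through a delay line; averaging the two
# oldest samples acts as a lowpass loop filter, so high harmonics die first.
import random

def pluck(frequency_hz, duration_s, sample_rate=44100, seed=0):
    """Return a list of samples approximating a plucked string."""
    rng = random.Random(seed)
    n = max(2, int(sample_rate / frequency_hz))       # delay-line length
    buf = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # initial noise burst
    out = []
    for _ in range(int(duration_s * sample_rate)):
        out.append(buf[0])
        buf.append(0.5 * (buf[0] + buf[1]))  # lossy reflection, recirculate
        buf.pop(0)
    return out

samples = pluck(440.0, 0.5)  # half a second of an A4-ish pluck
```

The delay-line length sets the pitch (sample_rate / n), and refining this loop with fractional delays and better loop filters leads toward the digital waveguide models covered in the session.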

Sunday, October 8, 9:00 am — 12:00 pm


Robert A. Pease, National Semiconductor Corp. - Santa Clara, CA, USA

Despite attempts by the folks in the digital domain to banish analog audio to the dusty corners of museums, analog remains an important component in all audio devices and systems. Analog circuitry and signals are found in power supplies, digital-audio interfaces, clock circuits, Class D amplifiers, and other "digital places." GOOD audio will always require GOOD analog design skills. SUPERIOR audio requires specialized skills and thoughtful layouts, as well as premium components and innovative circuits and signal routing. This Master Class will give attendees a unique perspective on analog design for audio equipment. Special insights into capacitors, resistors, wires, op-amps, and thermal problems will be highlighted.

Sunday, October 8, 2:00 pm — 4:00 pm


Roger Nichols

This Master Class explores the science and art of mixing stereophonic music. Emphasis is on pop, jazz, and rock music, with examples from Steely Dan, Tower of Power, Al DiMeola, and other sessions. The science of the mix - creative use of the frequency spectrum, maintaining proper phase relationships, panning philosophy, application of appropriate levels of signal processing, and other phenomena - is examined. Can you really fix it in the mix? How much does the recording affect the mix? Attendees will learn tricks of the trade perfected by Roger Nichols over several decades of award-winning recordings. Roger will also discuss "Mix Abuse": the abusive use of level controls and compression devices. Come learn the art of mixing from one of the masters.


(C) 2006, Audio Engineering Society, Inc.