
AES San Francisco 2008
Tutorial Details

Thursday, October 2, 9:00 am — 10:45 am

T1 - Electroacoustic Measurements


Presenter:
Christopher J. Struck, CJS Labs - San Francisco, CA, USA

Abstract:
This tutorial focuses on applications of electroacoustic measurement methods, instrumentation, and data interpretation, as well as practical information on how to perform appropriate tests. Linear system analysis and alternative measurement methods are examined. The topic of simulated free-field measurements is treated in detail. Nonlinearity and distortion measurements and their causes are described. Finally, a number of advanced tests are introduced.

This tutorial is intended to enable the participants to perform accurate audio and electroacoustic tests and provide them with the necessary tools to understand and correctly interpret the results.
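As a concrete illustration of the simulated free-field idea mentioned above, the Python sketch below windows a measured impulse response just before the first room reflection and transforms only that portion; the reflection time, window length, and fade are assumed, illustrative values rather than anything taken from the tutorial itself.

    import numpy as np

    def quasi_anechoic_response(impulse_response, fs, reflection_time_s):
        """Frequency response of the reflection-free part of a measured impulse response."""
        n = int(reflection_time_s * fs)            # samples before the first reflection
        window = np.ones(n)
        fade = max(1, int(0.1 * n))                # short fade-out to reduce truncation ripple
        window[-fade:] = 0.5 * (1 + np.cos(np.linspace(0, np.pi, fade)))
        spectrum = np.fft.rfft(impulse_response[:n] * window, n=4096)
        freqs = np.fft.rfftfreq(4096, 1 / fs)
        return freqs, 20 * np.log10(np.abs(spectrum) + 1e-12)

    # The usable low-frequency resolution is limited to roughly 1 / reflection_time_s,
    # e.g., a 5 ms reflection-free window resolves only down to about 200 Hz.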


Thursday, October 2, 11:00 am — 1:00 pm

T2 - Standards-Based Audio Networks Using IEEE 802.1 AVB


Presenters:
Robert Boatright, Harman International - CA, USA
Matthew Xavier Mora, Apple - Cupertino, CA, USA
Michael Johas Teener, Broadcom Corp. - Irvine, CA, USA

Abstract:
Recent work by IEEE 802 working groups will allow vendors to build a standards-based network with the appropriate quality of service for high quality audio performance and production. This new set of standards, developed by the IEEE 802.1 Audio Video Bridging Task Group, provides three major enhancements for 802-based networks:

1. Precise timing to support low-jitter media clocks and accurate synchronization of multiple streams,
2. A simple reservation protocol that allows an endpoint device to notify the various network elements in a path so that they can reserve the resources necessary to support a particular stream, and
3. Queuing and forwarding rules that ensure that such a stream will pass through the network within the delay specified by the reservation.

These enhancements require no changes to the Ethernet lower layers and are compatible with all the other functions of a standard Ethernet switch (a device that follows the IEEE 802.1Q bridge specification). As a result, the rest of the Ethernet ecosystem remains available to developers—in particular, the various high-speed physical layers (up to 10 gigabit/sec in current standards, with even higher speeds in development), security features (encryption and authorization), and advanced management features (remote testing and configuration) can be used. This tutorial will outline the basic protocols and capabilities of AVB networks, describe how such a network can be used, and provide some simple demonstrations of network operation (including a live comparison with a legacy Ethernet network).
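To make the reservation idea (item 2 above) concrete, the sketch below estimates the bandwidth a talker might ask the network to reserve for a single 8-channel, 48 kHz stream at the Class A observation interval. The channel count, sample size, and per-frame overhead figure are illustrative assumptions, not values taken from the standards.

    # Rough estimate of an AVB Class A reservation for one audio stream.

    SAMPLE_RATE_HZ = 48_000
    CHANNELS = 8
    BYTES_PER_SAMPLE = 4            # e.g., 24-bit audio carried in 32-bit slots
    CLASS_A_INTERVAL_S = 125e-6     # Class A: one frame per stream every 125 us
    OVERHEAD_BYTES = 82             # assumed Ethernet + VLAN + AVTP framing overhead

    samples_per_frame = round(SAMPLE_RATE_HZ * CLASS_A_INTERVAL_S)      # 6 at 48 kHz
    payload_bytes = samples_per_frame * CHANNELS * BYTES_PER_SAMPLE     # 192 bytes
    frame_bytes = payload_bytes + OVERHEAD_BYTES
    frames_per_second = 1 / CLASS_A_INTERVAL_S                          # 8000

    reserved_bps = frame_bytes * 8 * frames_per_second
    print(f"Reserved bandwidth: {reserved_bps / 1e6:.2f} Mbit/s")       # ~17.5 Mbit/s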


Thursday, October 2, 2:30 pm — 4:30 pm

T3 - Broadband Noise Reduction: Theory and Applications


Presenters:
Alexey Lukin, iZotope, Inc. - Boston, MA, USA
Jeremy Todd, iZotope, Inc. - Boston, MA, USA

Abstract:
Broadband noise reduction (BNR) is a common technique for attenuating background noise in audio recordings. Implementations of BNR have steadily improved over the past several decades, but the majority of them share the same basic principles. This tutorial discusses the signal processing theory behind BNR, beginning with earlier implementations such as broadband and multiband gates and compander-based systems for tape recording. Greater emphasis will then be placed on recent advances in more modern techniques based on spectral subtraction, including multi-resolution processing, psychoacoustic models, and the separation of noise into tonal and broadband parts. We will compare examples of each technique for their effectiveness on several types of audio recordings.
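For readers unfamiliar with the core idea, here is a minimal spectral-subtraction sketch in Python (one flavor of BNR), assuming the noise profile is estimated from a noise-only excerpt; practical systems add the multi-resolution analysis, psychoacoustic modeling, and musical-noise suppression discussed in the tutorial.

    import numpy as np

    def spectral_subtract(x, noise, frame=1024, hop=512, floor=0.05):
        """Subtract an average noise magnitude spectrum from each frame of x."""
        window = np.hanning(frame)
        # Average noise magnitude spectrum from a noise-only excerpt
        noise_mag = np.mean(
            [np.abs(np.fft.rfft(window * noise[i:i + frame]))
             for i in range(0, len(noise) - frame, hop)], axis=0)
        out = np.zeros(len(x))
        for i in range(0, len(x) - frame, hop):
            spec = np.fft.rfft(window * x[i:i + frame])
            mag, phase = np.abs(spec), np.angle(spec)
            clean = np.maximum(mag - noise_mag, floor * mag)   # subtract, keep a spectral floor
            out[i:i + frame] += np.fft.irfft(clean * np.exp(1j * phase))
        return out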


Friday, October 3, 9:00 am — 11:30 am

T4 - Perceptual Audio Evaluation


Presenters:
Søren Bech, Bang & Olufsen A/S - Struer, Denmark
Nick Zacharov, SenseLab - Delta, Denmark

Abstract:
The aim of this tutorial is to provide an overview of perceptual evaluation of audio through listening tests, based on good practices in the audio and affiliated industries. The tutorial is aimed at anyone interested in the evaluation of audio quality and will provide a highly condensed overview of all aspects of performing listening tests in a robust manner. Topics will include: (1) definition of a suitable research question and associated hypothesis, (2) definition of the question to be answered by the subject, (3) scaling of the subjective response, (4) control of experimental variables such as choice of signal, reproduction system, listening room, and selection of test subjects, (5) statistical planning of the experiments, and (6) statistical analysis of the subjective responses. The tutorial will include both theory and practical examples including discussion of the recommendations of relevant international standards (IEC, ITU, ISO). The presentation will be made available to attendees and an extended version will be available in the form of the text “Perceptual Audio Evaluation” authored by Søren Bech and Nick Zacharov.
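As a small, hypothetical illustration of step (6), the sketch below runs a paired comparison of two systems graded by the same panel; the grades are invented for the example, and a real analysis would follow the experimental design and the standards cited in the tutorial.

    import numpy as np
    from scipy import stats

    # Hypothetical 0-100 quality grades from ten listeners for systems A and B
    system_a = np.array([78, 82, 75, 90, 85, 79, 88, 81, 77, 84])
    system_b = np.array([70, 76, 72, 85, 80, 74, 83, 78, 71, 79])

    # Paired t-test across listeners (each listener graded both systems)
    t, p = stats.ttest_rel(system_a, system_b)
    print(f"mean A = {system_a.mean():.1f}, mean B = {system_b.mean():.1f}, "
          f"t = {t:.2f}, p = {p:.4f}")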


Friday, October 3, 11:30 am — 12:30 pm

T5 - A Postproduction Model for Video Game Audio: Sound Design, Mixing, and Studio Techniques


Presenter:
Rob Bridgett, Radical Entertainment - Vancouver, BC, Canada

Abstract:
In video game development, audio postproduction is still a concept that is frowned upon and frequently misunderstood. Audio content often still has the same cut-off deadlines as visual and design content, allowing no time to polish the audio or to reconsider the sound in the context of the finished visuals. This tutorial discusses ways in which video game audio can learn from the models of postproduction sound in cinema, allotting a specific time at the end of a project for postproduction sound design and, perhaps more importantly, for mixing and balancing all the elements of the soundtrack before the game is shipped. The tutorial will draw upon examples and experience from postproduction audio work we have done over the last two years, such as mixing the Scarface game at Skywalker Sound, as well as more recent titles such as Prototype. The tutorial will investigate:

• Why cutting off sound at the same time as design and art doesn't work
• Planning and preparing for postproduction audio on a game
• Real-time sound replacement and mixing technology (proprietary and middleware solutions)
• Interactive mixing strategies (my game is 40+ hours long, how do I mix it all?)
• Building/equipping a studio for postproduction game audio.


Friday, October 3, 2:30 pm — 3:30 pm

T6 - Modern Perspectives on Hewlett's Sine Wave Oscillator


Presenter:
Jim Williams, Linear Technology - Milpitas, CA, USA

Abstract:
This tutorial describes the thesis and related work of a Stanford University graduate student, William R. Hewlett. Hewlett's 1939 thesis, concerning a then-new type of sine wave oscillator, is reviewed. His use of new concepts and ideas from Nyquist, Black, and Meacham is considered. Hewlett displays an uncanny knack for combining ideas to synthesize his desired result. The oscillator is a beautiful example of lateral thinking: the whole problem was considered in an interdisciplinary spirit, not just an electronic one. This is the signature of superior problem solving and good engineering. Although the theoretics and technology are now passé, the quality of Hewlett's thinking remains rare, and singularly human. No computer-driven “expert system” could ever emulate such lateral thinking, advertising copy notwithstanding. Modern adaptations of Hewlett's guidance complete the tutorial. Handouts include Hewlett's thesis, a detailed production schematic of the oscillator, and modern versions of the circuit.
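As a brief numeric aside (not part of the tutorial handouts): Hewlett's circuit is the Wien bridge oscillator, which runs at the frequency where the RC bridge has zero phase shift and needs a loop gain held at exactly 3, the job his famous lamp performs. The component values below are arbitrary examples.

    import math

    R = 10e3      # ohms   (example values, not Hewlett's)
    C = 16e-9     # farads

    f = 1 / (2 * math.pi * R * C)     # Wien bridge oscillation frequency
    print(f"{f:.0f} Hz")              # ~995 Hz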


Friday, October 3, 4:30 pm — 6:00 pm

T7 - Sound in the UI


Presenter:
Jeff Essex, Audiosyncrasy - Albany, CA, USA

Abstract:
Many computer and consumer electronics products use sound as part of their UI, both to communicate actions and to create a "personality" for their brand. This session will present numerous real-world examples of sounds created for music players, set-top boxes and operating systems. We'll follow projects from design to implementation with special attention to solving real-world problems that arise during development. We'll also discuss some philosophies of sound design, showing examples of how people respond to various audio cues and how those reactions can be used to convey information about the status of a device (navigation through menus, etc.).


Friday, October 3, 5:30 pm — 6:45 pm

T8 - Free Source Code for Processing AES Audio Data


Presenters:
Gregg C. Hawkes, Xilinx - San Jose, CA, USA
Reed Tidwell, Xilinx - San Jose, CA, USA

Abstract:
This session is a tutorial on the Xilinx free Verilog and VHDL source code for extracting and inserting audio in SDI streams, including “on the fly” error correction and high performance, continuously adaptive, asynchronous sample rate conversion. The audio sample rate conversion supports large ratios as well as fractional conversion rates and maintains high performance while continuously adapting itself to the input and output rates without user control. The features, device utilization, and performance of the IP will be presented and demonstrated with industry standard audio hardware.
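The Xilinx code itself is Verilog/VHDL; as a purely conceptual sketch of fractional-ratio conversion, the Python below resamples by linear interpolation with the ratio supplied explicitly. An adaptive converter like the one described would instead estimate that ratio continuously from the input and output clocks and use a much higher-quality interpolation filter.

    import numpy as np

    def resample_linear(x, ratio):
        """Resample x by out_rate / in_rate = ratio using linear interpolation."""
        n_out = round(len(x) * ratio)
        t = np.minimum(np.arange(n_out) / ratio, len(x) - 1)   # fractional read positions in x
        i = np.minimum(np.floor(t).astype(int), len(x) - 2)
        frac = t - i
        return (1 - frac) * x[i] + frac * x[i + 1]

    # Example: convert a 1 kHz tone from 44.1 kHz to 48 kHz
    fs_in, fs_out = 44_100, 48_000
    tone = np.sin(2 * np.pi * 1000 * np.arange(fs_in) / fs_in)
    y = resample_linear(tone, fs_out / fs_in)
    print(len(tone), "->", len(y))     # 44100 -> 48000 samples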


Saturday, October 4, 9:00 am — 10:45 am

T9 - How I Does Filters: An Uneducated Person’s Way to Design Highly Regarded Digital Equalizers and Filters


Presenter:
Peter Eastty, Oxford Digital Limited - Oxfordshire, UK

Abstract:
Much has been written in many learned papers about the design of audio filters and equalizers; this is NOT another one of those. The presenter is a bear of little brain and has over the years had to reduce the subject of digital filtering into bite-sized lumps containing a number of simple recipes that have got him through most of his professional life. Complete practical implementations are given for high-pass and low-pass multi-order filters, bell (or presence) filters, and shelving filters, including the infrequently seen higher-order types. The tutorial is designed for the complete novice; it is light on mathematics and heavy on explanation and visualization—even so, the provided code works and can be put to practical use.
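In the spirit of those recipes, here is one bite-sized example: a "bell" (peaking) equalizer as a single biquad, using the widely circulated Audio EQ Cookbook formulas rather than necessarily the presenter's own recipe. High-pass, low-pass, and shelving sections follow the same pattern with different coefficient formulas.

    import math

    def peaking_biquad(fs, f0, gain_db, q):
        """Return normalized (b, a) coefficients for a peaking-EQ biquad."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2 * q)
        b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
        a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
        return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

    def biquad(x, b, a):
        """Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
        y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
        for xn in x:
            yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
            y.append(yn)
            x2, x1, y2, y1 = x1, xn, y1, yn
        return y

    b, a = peaking_biquad(fs=48_000, f0=1_000, gain_db=6.0, q=1.4)   # +6 dB bell at 1 kHz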


Saturday, October 4, 9:00 am — 10:30 am

T10 - New Technologies for Up to 7.1 Channel Playback in Any Game Console Format


Presenter:
Geir Skaaden, Neural Audio Corp. - Kirkland, WA, USA

Abstract:
This tutorial investigates methods for increasing the number of audio channels in a gaming console beyond its current hardware limitations. The audio engine within a game is capable of creating a 360° environment; however, the console hardware uses only a few channels to represent this world. Given that home playback systems are commonly able to reproduce up to 7.1 channels, how do game developers increase the number of playback channels for a platform that is limited to 2 or 5 outputs? New encoding technologies make this possible. Current methods will be described, along with new console-independent technologies that run within the game engine. Game content will be used to demonstrate the encode/decode process.
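For orientation only, and not a description of the presenter's technology: the simplest way to fit more channels into fewer outputs is a fixed downmix matrix, sketched below in Python. Commercial matrix encoders go further by embedding phase relationships that let a decoder steer the content back out to 5.1 or 7.1.

    import numpy as np

    def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs, c=0.7071):
        """Simple ITU-style 5.1 -> 2.0 fold-down (LFE omitted from the fold-down)."""
        Lt = L + c * C + c * Ls
        Rt = R + c * C + c * Rs
        return Lt, Rt

    # Example with placeholder channel signals
    n = 48_000
    chans = {name: np.random.randn(n) for name in ("L", "R", "C", "LFE", "Ls", "Rs")}
    Lt, Rt = downmix_51_to_stereo(**chans)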


Saturday, October 4, 11:00 am — 12:00 pm

T11 - [Canceled]



Saturday, October 4, 2:30 pm — 4:30 pm

T12 - Damping of the Room Low-Frequency Acoustics (Passive and Active)


Presenters:
Reza Kashani, University of Dayton - Dayton, OH, USA
Jim Wischmeyer, Modular Sound Systems, Inc. - Lake Barrington, IL, USA

Abstract:
As a result of its size and geometry, a room excessively amplifies sound at certain frequencies. This is the result of the standing waves (acoustic resonances/modes) of the room: waves whose original oscillation is continuously reinforced by their own reflections. Rooms have many resonances, but only the low-frequency ones are discrete, distinct, unaffected by the sound-absorbing material in the room, and accommodate most of the acoustic energy build-up in the room.
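To see why only the low-frequency modes are sparse and distinct, the sketch below lists the modal frequencies of an idealized rectangular room; the dimensions are an arbitrary example, and real rooms deviate from the ideal formula.

    import itertools, math

    C_SOUND = 343.0                   # speed of sound, m/s
    Lx, Ly, Lz = 6.0, 4.5, 2.7        # example room dimensions, meters

    modes = []
    for nx, ny, nz in itertools.product(range(4), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue
        f = (C_SOUND / 2) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
        modes.append((f, (nx, ny, nz)))

    for f, idx in sorted(modes)[:8]:          # the lowest few modes are widely spaced
        print(f"{f:6.1f} Hz  mode {idx}")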


Saturday, October 4, 4:30 pm — 6:00 pm

T13 - Improved Power Supplies for Audio Digital-Analog Conversion


Presenters:
Mark Brasfield, National Semiconductor Corporation - Santa Clara, CA, USA
Robert A. Pease, National Semiconductor Corporation - Santa Clara, CA, USA

Abstract:
It is well known that good, stable, linear, constant-impedance, wide-bandwidth power supplies are important for high-quality digital-to-analog conversion. Poor supplies can add noise, jitter, and other uncertainties, adversely affecting the audio.


Saturday, October 4, 5:00 pm — 6:45 pm

T14 - Electric Guitar-The Science Behind the Ritual


Presenter:
Alex U. Case, University of Massachusetts - Lowell, MA, USA

Abstract:
It is an unwritten law that recording engineers approach the electric guitar amplifier with a Shure SM57, in close against the grille cloth, a bit off-center of the driver, and angled a little. These recording decisions serve us well, but do they really matter? What changes when you back the microphone away from the amp, move it off center of the driver, and change the angle? Alex Case, Sound Recording Technology professor to graduates and undergraduates at UMass Lowell, breaks it down, with measurements and discussion of the variables that lead to punch, crunch, and other desirables in electric guitar tone.


Sunday, October 5, 11:30 am — 1:00 pm

T15 - Real-Time Embedded Audio Signal Processing


Presenter:
Paul Beckmann, DSP Concepts, LLC - Sunnyvale, CA, USA

Abstract:
Product developers implementing audio signal processing algorithms in real-time encounter a host of challenges and tradeoffs. This tutorial focuses on the high-level architectural design decisions commonly faced. We discuss memory usage, block processing, latency, interrupts, and threading in the context of modern digital signal processors with an eye toward creating maintainable and reusable code. The impact of integrating audio decoders and streaming audio to the overall design will be presented. Examples will be drawn from typical professional, consumer, and automotive audio applications.
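A minimal sketch of the block-processing pattern discussed here, in Python for readability (a real system would be C/C++ driven by a DMA interrupt): state is preallocated, each callback processes a fixed-size block, and the block size sets the added latency.

    import numpy as np

    BLOCK_SIZE = 64            # frames per callback; latency = 64 / 48000 ~ 1.3 ms
    FS = 48_000

    class OnePoleLowpass:
        """Trivial per-block processor with persistent state (no per-call allocation)."""
        def __init__(self, cutoff_hz, fs):
            self.a = np.exp(-2 * np.pi * cutoff_hz / fs)
            self.z = 0.0

        def process_block(self, block_in, block_out):
            for n in range(len(block_in)):
                self.z = (1 - self.a) * block_in[n] + self.a * self.z
                block_out[n] = self.z

    # Simulated callback loop
    lp = OnePoleLowpass(cutoff_hz=1_000, fs=FS)
    out = np.zeros(BLOCK_SIZE)
    for _ in range(10):
        block = np.random.randn(BLOCK_SIZE)    # stand-in for incoming samples
        lp.process_block(block, out)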


Sunday, October 5, 11:30 am — 1:00 pm

T16 - Latest Advances in Ceramic Loudspeakers and Their Drivers


Presenters:
Mark Cherry, Maxim, Inc. - Sunnyvale, CA, USA
Robert Polleros, Maxim Integrated Products - Austria
Peter Tiller, Murata - Atlanta, GA, USA

Abstract:
New cell phone designs demand a small form factor while maintaining audio sound-pressure level, yet today's portable devices also need smaller, thinner, more power-efficient electronic components, and the dynamic speaker has typically been the component that limits how thin a handset can be. New developments in ceramic, or piezoelectric, loudspeakers have opened the door for sleek new designs: these speakers can deliver competitive sound-pressure levels (SPL) in a thin and compact package, making them a viable alternative to the traditional voice-coil dynamic speaker. Due to the capacitive nature of ceramic speakers, however, special considerations need to be taken into account when choosing an audio amplifier to drive them.
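A back-of-envelope sketch of why the amplifier choice matters: the current a capacitive load draws rises with frequency, I = 2πfCV. The capacitance and drive level below are assumed, illustrative values, not figures from the presenters.

    import math

    C_SPEAKER = 1e-6      # farads    (assumed ceramic-speaker capacitance)
    V_RMS = 15.0          # volts rms (assumed drive level)

    for f in (1_000, 5_000, 10_000, 20_000):
        i_rms = 2 * math.pi * f * C_SPEAKER * V_RMS
        print(f"{f:>6} Hz: {i_rms * 1000:6.0f} mA rms")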


Sunday, October 5, 2:30 pm — 4:30 pm

T17 - An Introduction to Digital Pulse Width Modulation for Audio Amplification


Presenter:
Pallab Midya, Freescale Semiconductor Inc. - Austin, TX, USA

Abstract:
Digital PWM is highly suitable for audio amplification. Digital audio sources can be readily converted to digital PWM using digital signal processing. The mathematical nonlinearity associated with PWM can be corrected with extremely high accuracy. Natural sampling and other techniques that convert a PCM signal to a digital PWM signal will be discussed. Due to limitations of digital clock speeds and jitter, the duty ratio of the PWM signal has to be quantized to a small number of bits. The noise due to quantization can be effectively shaped to fall outside the audio band. PWM-specific noise shaping techniques will be explained in detail. Further, there is a need for sample rate conversion for a digital PWM modulator to work with a digital PCM signal that is generated using a different clock. The mathematics of asynchronous sample rate converters will also be discussed. Digital PWM signals are amplified by a power stage that introduces nonlinearity and mixes in noise from the power supply. This mechanism will be examined and ways to correct for it will be discussed.
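As a toy sketch of two of the steps described (mapping PCM samples to duty ratios and quantizing the duty ratio with noise shaping), the Python below uses a simple first-order error-feedback quantizer. Real designs use natural sampling against the carrier and higher-order, PWM-aware shapers, and the frame rate shown is only an assumed example.

    import numpy as np

    def pcm_to_quantized_duty(x, duty_bits=8):
        """Map x in [-1, 1] to duty ratios in [0, 1], quantized to 2**duty_bits levels."""
        levels = 2 ** duty_bits
        out = np.empty_like(x)
        err = 0.0
        for n, s in enumerate(x):
            duty = (s + 1) / 2                       # [-1, 1] -> [0, 1]
            shaped = duty + err                      # first-order error feedback
            q = np.round(shaped * (levels - 1)) / (levels - 1)
            err = shaped - q                         # push quantization error to the next sample
            out[n] = q
        return out

    fs_pwm = 384_000                                 # assumed PWM frame rate
    t = np.arange(fs_pwm // 100) / fs_pwm
    duty = pcm_to_quantized_duty(0.8 * np.sin(2 * np.pi * 1000 * t))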


Sunday, October 5, 2:30 pm — 4:30 pm

T18 - FPGA for Broadcast Audio


Presenters:
John Lancken, Fairlight Pty LTD - Frenchs Forest, Australia
Girish Malipeddi, Altera Corporation - San Jose, CA, USA

Abstract:
This tutorial presents broadcast-quality solutions based on FPGA technology for audio processing with significant cost savings over existing discrete solutions. The solutions include digital audio interfaces such as AES3/SPDIF and I2S, audio processing functions such as sample rate converters, and SDI audio embed/de-embed functions. Along with these solutions, an audio-video framework is introduced that consists of a suite of A/V functions, reference designs, an open interface to easily stitch the A/V blocks together, a system design methodology, and development kits. Using the framework, system designers can quickly prototype and rapidly develop complex audio-video systems.


Sunday, October 5, 4:30 pm — 6:30 pm

T19 - Point-Counterpoint—Fixed vs. Floating-Point DSPs


Presenters:
Robert Bristow-Johnson, Audio Imagination
Jayant Datta, THX - Syracuse, NY, USA
Boris Lerner, Analog Devices - Norwood, MA, USA
Matthew Watson, Texas Instruments, Inc. - Dallas, TX, USA

Abstract:
There is a lot of controversy and interest in the signal processing community concerning the use of fixed- and floating-point DSPs. There are various trade-offs between these two approaches. The audience will walk away with an appreciation of these two approaches and an understanding of the strengths and weaknesses of each. Further, this tutorial will focus on audio-specific signal processing applications to show when a fixed-point DSP is applicable and when a floating-point DSP is suitable.
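As a small illustration of the trade-off (not an example from the panelists), the sketch below runs the same one-pole smoother in Q15-style fixed point and in floating point on a -60 dBFS tone; truncation in the fixed-point version discards low-level detail that the floating-point version keeps.

    import numpy as np

    def one_pole_float(x, a=0.999):
        y, state = np.empty_like(x), 0.0
        for n, s in enumerate(x):
            state = (1 - a) * s + a * state
            y[n] = state
        return y

    def one_pole_q15(x, a=0.999):
        A = int(round(a * 32768))                           # coefficient in Q15
        y, state = np.empty_like(x), 0
        for n, s in enumerate(x):
            s_q = int(round(s * 32767))                     # input quantized to Q15
            state = ((32768 - A) * s_q + A * state) >> 15   # truncated multiply-accumulate
            y[n] = state / 32767
        return y

    x = 1e-3 * np.sin(2 * np.pi * 100 * np.arange(4800) / 48_000)   # -60 dBFS tone
    err = one_pole_q15(x) - one_pole_float(x)
    print(f"rms error of the Q15 version: {np.sqrt(np.mean(err ** 2)):.2e}")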


Sunday, October 5, 5:00 pm — 6:45 pm

T20 - Radio Frequency Interference and Audio Systems


Presenter:
Jim Brown, Audio Systems Group, Inc.

Abstract:
This tutorial begins by identifying and discussing the fundamental mechanisms that couple RF into audio systems and allow it to be detected. Attention is then given to design techniques for both equipment and systems that avoid these problems and methods of fixing problems with existing equipment and systems that have been poorly designed or built.