AES New York 2009
Friday, October 9, 9:00 am — 10:30 am
T1 - The Growing Importance of Mastering in the Home Studio Era
Andres Mayo, Andres Mayo Mastering
Artists and producers increasingly use their home studios for music production, attracted by the favorable cost/benefit ratio, but they usually lack technical resources, and the acoustic response of their rooms is unknown. There is therefore a greater need for professional mastering services in order to achieve the so-called "standard commercial quality." This tutorial presents a list of common mistakes found in home-made mixes, illustrated with real-life audio examples taken directly from recent mastering sessions. What can and what can’t be fixed at the mastering stage?
Saturday, October 10, 2:30 pm — 4:30 pm
T2 - The Hearing Conservation Seminar
Benjamin Kanters, Columbia College Chicago - Chicago, IL, USA
The Hearing Conservation seminar is a new approach to promoting awareness of hearing loss and conservation. This program is specifically targeted to students and professionals in the audio and music industries. Experience has shown that this group of practitioners easily understands the concepts of hearing physiology, as many of the principles and theories are the same as those governing audio and acoustics. Moreover, these people are quick to understand the importance of developing their own safe listening habits, as well as being concerned for the hearing health of their clients and the music-listening public. The seminar is a 2-hour presentation in three units: an introduction to hearing physiology; noise-induced hearing loss; and practicing effective and sensible hearing conservation.
Presented on behalf of the AES Technical Committee on Hearing and Hearing Loss Prevention.
Saturday, October 10, 5:00 pm — 6:30 pm
T3 - Percussion Acoustics and Quantitative Drum Tuning
Rob Toulson, Anglia Ruskin University - Cambridge, UK
Intricate tuning of acoustic drums can significantly influence the quality and contextual fit of the instrument during a recording session. In this tutorial presentation, waveform and spectral analysis will be used to show that quantitative tuning and acoustic benchmarking are viable. The principal acoustic properties of popular drums will be shown by live demonstration, and aspects relating to drum tuning will be highlighted. In particular, the following tuning issues will be demonstrated and analyzed: achieving a uniform pitch across the drum head; tuning drums to a desired musical pitch; manipulating overtones and generating rich timbres; controlling attack and decay profiles; and tuning the drum kit as a pitched musical instrument.
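To make the idea of quantitative tuning concrete, here is a minimal sketch of one way to measure a drum's dominant pitch from a recorded hit by locating the strongest spectral peak. This is an illustrative assumption about the general approach, not the presenter's actual analysis method; the synthesized "drum hit" (a decaying sine) and the function name are invented for the example.

```python
import numpy as np

def estimate_fundamental(signal, sample_rate):
    """Return the frequency (Hz) of the strongest spectral peak."""
    windowed = signal * np.hanning(len(signal))          # reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Stand-in for a recorded tom hit: a 110 Hz tone with exponential decay.
sr = 44100
t = np.arange(0, 1.0, 1.0 / sr)
hit = np.exp(-4 * t) * np.sin(2 * np.pi * 110 * t)
print(estimate_fundamental(hit, sr))  # within a bin or two of 110 Hz
```

In practice the same measurement on two lug positions of a real drum head would show whether the head is tuned uniformly, and comparing the peak against a note table supports tuning to a desired musical pitch.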
Sunday, October 11, 5:00 pm — 6:30 pm
T4 - Electroacoustic Measurements of Headphones and Earphones
Christopher J. Struck, CJS Labs - San Francisco, CA, USA
This tutorial reviews basic electroacoustic measurement concepts: gain, sensitivity, sound fields, signals, and linear and nonlinear systems. The Insertion Gain concept is introduced. The orthotelephonic response is described as a target for both the free and diffuse fields. Equivalent volume and acoustic impedance are defined. Ear simulators and test manikins appropriate for circum-, supra-, and intra-aural earphones are presented. The salient portions of the IEC 60268-7 standard are reviewed, and examples are given of the basic measurements: frequency response, distortion, and impedance. A brief introduction to noise-canceling devices is also presented.
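As a small numerical illustration of the insertion gain concept mentioned above: insertion gain is the level at the measurement point with the earphone in place minus the open-ear level at the same point, both expressed in dB SPL. The pressure values below are invented for the example, not measured data.

```python
import numpy as np

def db_spl(pressure_pa, ref=20e-6):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * np.log10(pressure_pa / ref)

# Illustrative pressures at the same eardrum reference point.
open_ear_pa = 0.2   # without the device
aided_pa = 0.5      # with the earphone in place
insertion_gain_db = db_spl(aided_pa) - db_spl(open_ear_pa)
print(round(insertion_gain_db, 1))  # 20*log10(0.5/0.2), about 8.0 dB
```

Because both terms share the same reference pressure, the reference cancels and insertion gain reduces to the level difference between the aided and unaided responses.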
Sunday, October 11, 6:00 pm — 7:00 pm
T5 - Audio Preservation at the National Audio-Visual Conservation Center (NAVCC)
This tutorial will discuss audio preservation at the Library of Congress’ National Audio-Visual Conservation Center (NAVCC), which was recently completed in Culpeper, VA. It will also give an overview of the NAVCC, a state-of-the-art facility for storing and preserving recorded sound, video, and film materials.
Monday, October 12, 1:30 pm — 3:30 pm
T6 - Parametric Digital Reverberation
Jean-Marc Jot, DTS Inc. - Scotts Valley, CA, USA
This tutorial is intended for audio algorithm designers and students interested in the design and applications of digital reverberation algorithms. Artificial reverberation has long been an essential tool in the studio and is a critical component of modern desktop audio workstations and interactive audio rendering systems for gaming, virtual reality, and telepresence. Feedback delay networks yield computationally efficient tunable reverberators that can reproduce natural room reverberation decays with arbitrary fidelity, configurable for various spatial audio encoding and reproduction systems and formats.
This session will include a review of early digital reverberation developments and of the physical, perceptual, and signal properties of room reverberation; a general and rational method for designing digital reverberators, and a discussion of reverberation network topologies; flexible and accurate parametric tuning of the reverberation decay time; analysis and simulation of existing reverberation decays; the Energy Decay Relief of a reverberation response and its interpretation; practical applications in audio production and game audio rendering, illustrated by live demonstrations.
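The feedback delay network idea mentioned above can be sketched very compactly: several delay lines feed back through an orthogonal (lossless) mixing matrix, and a per-line attenuation sets the decay time. The delay lengths, matrix choice, and structure below are a minimal illustrative sketch in the spirit of such reverberators, not the specific designs presented in the session.

```python
import numpy as np

def fdn_reverb(x, fs, delays, rt60):
    """Minimal 4-line feedback delay network with a target RT60."""
    n = len(delays)
    # Normalized Hadamard matrix: orthogonal, so the feedback loop alone
    # is lossless; all decay comes from the per-line gains below.
    H = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]]) / 2.0
    # Gain per line so that each loop trip loses the right amount of
    # energy to reach -60 dB after rt60 seconds.
    g = 10.0 ** (-3.0 * np.asarray(delays) / (rt60 * fs))
    bufs = [np.zeros(d) for d in delays]   # circular delay buffers
    idx = [0] * n
    out = np.zeros(len(x))
    for t in range(len(x)):
        taps = np.array([bufs[i][idx[i]] for i in range(n)])
        out[t] = taps.sum()
        fb = H @ (g * taps)                # mix and attenuate feedback
        for i in range(n):
            bufs[i][idx[i]] = x[t] + fb[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return out

fs = 8000
impulse = np.zeros(fs)                     # 1-second impulse response
impulse[0] = 1.0
ir = fdn_reverb(impulse, fs, [149, 211, 263, 293], rt60=0.5)
```

Mutually prime delay lengths spread the echo pattern; making the gains frequency-dependent (one filter per line) extends this sketch toward the frequency-dependent decay times discussed in the session.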
Monday, October 12, 4:00 pm — 5:00 pm
T7 - The Manifold Joys, Uses, and Misuses of Polynomial Mapping Functions in Signal Processing
Robert Bristow-Johnson, audioImagination - Burlington, VT, USA
In digital audio signal processing, we “do math” upon samples of audio signals, but also with parameters associated with audio signals. These parameters can perhaps be loosely divided into two groups: numbers that are important and recognizable to the user of the DSP algorithm (loudness in dB, frequency in Hz, pitch or bandwidth in octaves, RT60 in seconds), and numbers that are of direct concern to the DSP algorithm itself (coefficients, thresholds, displacements in samples). The relationships between these groups of parameters involve transcendental functions that may not be available on the processor used in the implementation. This tutorial shows how to implement such mappings without the ubiquitous look-up table (LUT) by fitting polynomials to the functions, where this is useful in audio DSP, and discusses the different fitting criteria and methods of fitting.
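As a toy example of the general idea, the dB-to-linear-gain map 10^(x/20) is transcendental, but over a bounded control range a low-order polynomial approximates it closely with only multiplies and adds. The range, order, and least-squares criterion below are one illustrative choice among the fitting criteria the tutorial compares, not the presenter's specific recipe.

```python
import numpy as np

# Fit a quartic to the dB -> linear-gain map 10^(x/20) over a bounded
# fader range, replacing both a LUT and a transcendental library call.
x = np.linspace(-24.0, 0.0, 200)       # gain control range in dB
y = 10.0 ** (x / 20.0)                 # exact mapping
coeffs = np.polyfit(x, y, deg=4)       # least-squares fit criterion

# Evaluate the polynomial at -6 dB and compare with the exact value.
approx = np.polyval(coeffs, -6.0)      # Horner evaluation: 4 mults, 4 adds
exact = 10.0 ** (-6.0 / 20.0)
print(abs(approx - exact))             # small approximation error
```

A minimax (Chebyshev) criterion would equalize the worst-case error over the range instead of minimizing the average squared error; which criterion matters depends on whether occasional worst-case deviation or overall fit is more audible in the application.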