120th AES Convention - Paris, France - Dates: Saturday May 20 - Tuesday May 23, 2006 - Porte de Versailles


AES Paris 2006
Tutorial Session Details


Saturday, May 20, 09:30 — 11:30

T1 - MASTERING FOR MULTICHANNEL AUDIO

Presenters:
Jeff Levison, DTS, Inc. - Agoura Hills, California
Darcy Proper, Galaxy Studios - Mol, Belgium
Marko Schneider, Skywalk Mastering - Trierweiler, Germany

Abstract:
In stereo music production, mastering has become an exacting science with specific goals, techniques, and tools. Surround systems are becoming the standard in consumer applications, so how has the stereo mastering experience been adapted to multichannel? This tutorial will examine the multichannel work of the mastering engineer in music and the repurposing of motion picture audio for DVD. A discussion of practical approaches and work procedures (with real-world examples) will illustrate issues such as dynamics control, stereo downmix, bass management, bit rate, and audio encoding for authoring. The goals, differences, and similarities of mastering for music and film will be compared. Special emphasis will be placed on the new challenges of broadcast, Blu-ray, and HD DVD.


Saturday, May 20, 14:00 — 16:00

T2 - COMPUTATIONAL AUDITORY SCENE ANALYSIS

Presenter:
Dan Ellis, Columbia University - USA

Abstract:
Despite recent advances in blind source separation, the ability of human listeners to make sense of complex sound mixtures and to perceive the qualities of individual sources remains an unmatched benchmark. This tutorial will review the research area known as Computational Auditory Scene Analysis (CASA), which attempts to duplicate on a computer the source organization performed by listeners. The tutorial will include relevant perceptual results and compare CASA with other techniques, such as Independent Component Analysis.


Saturday, May 20, 14:00 — 16:00

T3 - LOUDSPEAKERS AND ROOMS FOR SOUND REPRODUCTION

Presenter:
Floyd Toole, Harman Inc. - USA

Abstract:
The physical measures by which acousticians evaluate the performance of rooms have evolved in large performance spaces such as concert halls. They rely on assumptions that become progressively less valid as spaces get smaller and more acoustically absorptive. In listening rooms the loudspeakers and the rooms interact differently below and above a transition region around 300 Hz. Above this transition we need to understand our reactions to reflected sounds; below it the modal behavior of the space is the dominant factor. A provocative observation in this tutorial has to do with human adaptation to the complexities of reflective rooms and the extent to which it allows us to correctly localize sounds in direction and distance and to hear much of the true timbral nature of sound sources. Although the interactions of loudspeakers and listeners in small rooms are becoming clearer, there are still gaps in our understanding.


Saturday, May 20, 16:30 — 18:30

T4 - LOW BIT-RATE AUDIO CODING

Presenter:
John Strawn, S Systems Inc. - Larkspur, CA, USA

Abstract:
Audio compression involves removing certain parts of the signal, with the goal of reducing the data rate without impacting the audio quality much, if at all. In this tutorial, which will include sound examples, we start with an overview of the perceptual bag of tricks used in perceptually based codecs such as MP3 and AC-3. Perceptual insights are often combined with mathematical innovations, such as the discrete cosine transform (DCT). For R&D engineers, there will be information about methods for implementing the DCT. We will show how the building blocks can be assembled into the basic structure of a perceptual encoder and decoder. As time allows, we will review the basic families of codecs, including where MP3 fits in, and look at non-perceptually based codecs. Finally, building on the theory covered here, there are tips for the recording engineer making an MP3 who wants to minimize undesired artifacts.
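As an aside not drawn from the tutorial itself: the transform building block mentioned above can be sketched in a few lines. This is a naive, unnormalized DCT-II; real codecs use fast, block-overlapped variants such as the MDCT, so this only illustrates the energy-compaction idea that makes the transform useful for coding.

```python
import math

def dct_ii(x):
    """Naive DCT-II of a block x: coefficient X[k] measures how much of
    cosine "shape" k is present in the input block of length N."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A constant (DC) block concentrates all its energy in coefficient 0 --
# few significant coefficients means fewer bits to transmit.
coeffs = dct_ii([1.0] * 8)
print(round(coeffs[0], 6))  # 8.0: all energy in the DC coefficient
print(round(coeffs[1], 6))  # 0.0: nothing in the higher coefficients
```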


Saturday, May 20, 16:30 — 18:00

T5 - HD VIDEO WITH HI-RES AUDIO

Presenter:
Ronald Prent

Abstract:
This tutorial details the making of Galaxy Studios' special live recording of Omar Hakim and his band in HD video and 48-track audio at 24-bit/192 kHz. The recording was then mixed on an API Vision console and captured again at 24/192. There will be ample time for questions following the documentary.


Saturday, May 20, 17:30 — 18:30

T6 - THE ORIGINS OF DIGITAL AUDIO AND COMPUTERS: CRACKING ENIGMA—WWII CIPHER MACHINES AND THE ULTRA SECRET

Presenter:
Jon Paul

Abstract:
This tutorial is a historical perspective on codes, cipher machines, Enigma, and the foundations of digital audio and computing.

Compression, DSP, and modern computing evolved from developments in WWII such as speech scrambling and breaking the Enigma code. The cracking of Enigma advanced Allied victory by a year and led the way to much of today's digital technology. We will highlight various cipher machines and their solutions to illustrate a time line that points directly toward modern digital audio.


Sunday, May 21, 08:30 — 10:30

T7 - HEARING LOSS—CAUSES, PREVENTIVE MEASURES AND EFFECTS ON SOUND PROFESSIONALS AND THE AUDIENCE

Presenters:
Jan Voetmann, DELTA - Hoersholm, Denmark
Dorte Hammershoi, Aalborg University - Aalborg, Denmark
Kim Kahari, The National Institute for Working Life - Gothenburg, Sweden
Anne-Mette Mohr, The Interdisciplinary Healthclinic - Copenhagen, Denmark

Abstract:
Hearing loss is of growing concern in the event industry. Powerful low-distortion sound systems and personal portable audio devices make it easy to inflict on listeners a noise dose big enough to cause temporary or permanent hearing loss and/or tinnitus. Such a hearing loss is a serious handicap that cannot be cured. This tutorial will explain some of the basics of hearing; the difficulties in obtaining adequate measurements of "music noise"; tinnitus and how to cope with it; and experiments with new discotheques designed with good acoustics to reduce the risk of hearing loss.


Sunday, May 21, 11:00 — 13:00

T8 - LATENCY IN AUDIO: FUNDAMENTALS

Presenter:
Kevin Gross, Cirrus Logic - USA

Abstract:
Latency or group delay is a measure of time delay experienced by a signal passing through a system. Latency is increasingly recognized as an important performance metric for digital audio systems. Audio professionals accustomed to analog equipment may not be wholly familiar with latency's characteristics and effect on system performance.

What causes latency? What does latency sound like? When is latency an important consideration? How much latency is too much? Is lower latency always better? What can be done to mitigate latency?

This tutorial will cover the fundamentals of latency, draw on the literature and experience of live-sound engineers, and use live demonstrations to address these questions.
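One common latency source, buffering, lends itself to back-of-envelope arithmetic: a buffer of N frames at sample rate fs delays the signal by N/fs seconds. The sketch below is not from the tutorial, and the stage sizes in it are invented purely for illustration.

```python
SAMPLE_RATE = 48_000  # Hz

def buffer_latency_ms(frames, rate=SAMPLE_RATE):
    """A buffer of `frames` samples delays the signal by frames / rate seconds."""
    return frames / rate * 1000.0

# Hypothetical trip through a digital console: ADC buffer, DSP block
# processing, DAC buffer. Buffer sizes here are made-up examples.
stages = {"ADC": 64, "DSP block": 256, "DAC": 64}
total = sum(buffer_latency_ms(f) for f in stages.values())
print(f"total buffering latency: {total:.2f} ms")  # prints 8.00 ms
```

At 48 kHz, every 48 frames of buffering costs one millisecond, which is why low-latency systems fight for small buffer sizes at every stage.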


Sunday, May 21, 11:00 — 13:00

T9 - PRACTICALITIES OF THE HOME STUDIO

Presenter:
Duncan Williams, University of Westminster - UK

Abstract:
This is an illustrated tutorial on the design and operation of the home project studio. It considers what a person investing in a home studio needs to know and what a typical installation is likely to involve. The implications of selecting appropriate software and hardware for a project studio will be discussed, including latency, bit depth, cross compatibility, and their subsequent impact on the quality debate.

The signal and workflow of an example project will be explored, from inception to completion, including the implementation and scope of a home mastering chain. Is it really possible to produce release-quality recordings at home?

This session will conclude with an open discussion.


Sunday, May 21, 14:00 — 16:00

T10 - AUDIO BROADCASTING TECHNOLOGIES FOR NEW MEDIA

Presenters:
Lars Jonsson, Swedish Radio
Ola Kejving, SR Sweden
Erik Lundbeck, SVT Sweden
Roland Vlaicu, Dolby Germany

Abstract:
• Distribution Techniques for Radio
• New audio coding systems along with HDTV
• Metadata transfer along with live PCM surround sound

This tutorial will include an overview of new distribution systems for audio broadcasting from implementation, frequency planning, and market perspectives. Digital Audio Broadcasting (DAB) has recently adopted a second-generation family of standards. These developments aim at introducing streaming video and multimedia services combined with the originally planned radio service. DAB IP data broadcasting is being aligned with other new wireless distribution systems, such as DMB and DVB-H, transmitted to handheld devices. This will enable interoperability with mobile networks and widen the scope of DAB.

New audio formats for surround sound, along with the introduction of HDTV over the air and in the new HD DVD and Blu-ray disc formats, will be presented. Improved metadata solutions for loudness control will be covered, as well as the transfer of content metadata through the broadcasting chain from production to the consumer.


Sunday, May 21, 16:30 — 18:30

T11 - DESCRIPTIVE ANALYSIS I: CONSENSUS VOCABULARY DEVELOPMENT

Presenters:
Nick Zacharov, Nokia Corporation - Finland
Søren Bech, Bang and Olufsen a/s - Denmark
Agnès Giboreau, Adriant - France
Gaëtan Lorho, Nokia Corporation - Finland
Geoff Martin, Bang and Olufsen a/s - Denmark

Abstract:
Descriptive analysis (DA) covers a collection of techniques that can be used for evaluating the detailed perceptual characteristics of products or systems through listening tests. The application of DA techniques to audio has been evolving over the last few years.

This tutorial workshop aims to provide clear guidance to researchers and experimenters regarding the nature of descriptive analysis and its practical application in audio. The focus will be placed on so-called consensus language methods. The meaning of attributes and their development will be studied using consensus vocabulary DA techniques. A review of standards pertaining to such methods will be presented, along with several practical example applications in audio.


Sunday, May 21, 16:30 — 18:30

T12 - LOUDSPEAKER AND HEADPHONE FUNDAMENTALS

Presenters:
Juha Backman
Carl Poldy

Abstract:
Loudspeaker and transduction fundamentals (Juha Backman)
- Transduction and radiation fundamentals
- Structure of dynamic drivers
- From driver to loudspeaker system
- Whys and whats of enclosures
- Loudspeakers in different scales
- Piezoelectric and electrostatic loudspeakers

Headphone fundamentals (Carl Poldy)
- What makes headphones different from loudspeakers
- Types of headphones
- Measurements and quality criteria


Monday, May 22, 11:00 — 13:00

T13 - CHOOSING THE RIGHT SOUND REINFORCEMENT SYSTEM FOR THE VENUE

Presenters:
Terry Nelson, Switzerland
Marc de Fouquieres, Dispatch SA
Ben Kok, Consultant
David Norman, Consultant

Abstract:
Even though the world of sound reinforcement has advanced technically a great deal over the last decade or so, equipment choices for installations, both fixed and mobile, are often dictated by fashion or "monkey see, monkey do" rather than by sound engineering decisions. This tutorial aims to provide pointers and basics for specifying a sound reinforcement system that is appropriate for a particular venue or application in terms of desired performance and results.


Monday, May 22, 13:30 — 15:30

T14 - HUMAN FACTORS IN THE DESIGN OF AUDIO PRODUCTS AND SYSTEMS

Presenters:
Jeremy Cooperstock, McGill University - Canada
William Martens, McGill University - Canada

Abstract:
This combined workshop/tutorial will begin with a primer on the design of audio products and systems, intended to serve as an introduction to the question of what audio engineers should know about human factors. From there, we will review existing resources and successes that aid designers, provide an overview of design and testing methodology, and discuss a range of user interface paradigms that go beyond the generic graphical UI. Pending arrangements, the session will also include a series of case studies describing, from both user and designer perspectives, what makes certain interfaces successful and why.

(This tutorial precedes Workshop 12 and the two are conceived as a joint event)


Monday, May 22, 13:30 — 15:30

T15 - METADATA MANAGEMENT

Presenter:
Michael Zimmerman, VCS - Germany

Abstract:
Summary

1. What is metadata?
Metadata (Greek meta- + Latin data, "information"), literally "data about data", is information about another set of data.
A common example is a library catalogue card, which contains data about the contents and location of a book: it is data about the data in the book referred to by the card. Other common contents of metadata include the source or author of the described dataset, how it should be accessed, and its limitations. Another important type of data about data is the link or relationship between data. Some metadata schemes attempt to embrace this concept, such as the Dublin Core element link. …
2. Types of metadata
Types of metadata address the different use cases where metadata is required. These can be databases, programs, files, or other scopes for metadata.
3. Metadata standards
Depending on the use case, there are several standards that can be used to achieve a common description and schema of metadata. There are open standards, such as Dublin Core, that are not tied to a certain set of data and can be applied to nearly any data. There are also specialized sets of metadata, e.g., from the EBU, BBC, or IRT, that address a subset of data or a certain use case. This chapter should give an overview of the existing standards and their purpose.
4. Usage and structure of metadata
a. External description of data, e.g. XML or XML Schema.
b. Internal description of data, e.g. MXF, BWF, TIFF
c. Linking and building relationships between sets of data, e.g. RDF and OWL
d. Adding semantics and meaning to metadata, Semantic Web, ontologies and topic maps
5. Conclusion
The recent possibilities for digital metadata management offer great opportunities accompanied by great risks. The conclusion should state the problems and opportunities arising from the current situation.
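To make the "external description" case in point 4a concrete, here is a sketch (not from the session) of a Dublin Core record for an audio asset, built with Python's standard xml.etree module. The record values are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Namespace of the open Dublin Core element set mentioned above.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# A hypothetical catalogue record describing a broadcast audio asset
# externally, i.e. in a separate XML document rather than embedded in
# the media file itself.
record = ET.Element("record")
for element, value in [
    ("title",   "Evening Concert, Part 1"),
    ("creator", "Example Radio Orchestra"),
    ("format",  "audio/wav"),
    ("date",    "2006-05-22"),
]:
    child = ET.SubElement(record, f"{{{DC}}}{element}")
    child.text = value

print(ET.tostring(record, encoding="unicode"))
```

Because Dublin Core is not tied to any particular kind of data, the same four elements would describe a book, a photograph, or a program file equally well; that generality is exactly the trade-off against the specialized broadcast schemas.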


Monday, May 22, 16:00 — 18:00

T16 - AUDIO SYNCHRONIZATION

Presenters:
Colin Broad, CB Electronics - UK
Daniel Gollety, RS422 - France

Abstract:
This tutorial will present synchronization, from mains to hi-def, and a history of sound synchronization from film to the present day. The final section will place specific emphasis on high-definition video, both interlaced and progressive: A-frame, 2:3 pulldown, bi-level syncs, tri-level syncs, and slow-PAL.


Tuesday, May 23, 09:00 — 11:00

T17 - QUANTIZATION EFFECTS IN AUDIO SIGNAL PROCESSING

Presenter:
Jamie Angus, University of Salford - UK

Abstract:
In this tutorial we will first present the principles of quantization and the basic effects of finite precision in digital filter implementations (both fixed and floating point) on the audio signal. We explain the effects of finite precision on frequency response (using an audio equalization task as an example). We shall see how different forms of filter structures offer advantages when finite wordlength is considered. We shall also look at how different structures affect the audio signal. Finally we will discuss the effect of different types of dither and the various pitfalls that can occur.
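The dither effect described here can be sketched numerically (this is an outside illustration, not material from the tutorial): quantizing a low-level signal without dither produces error that is correlated with the signal, i.e. distortion, while TPDF dither of ±1 LSB trades it for benign, noise-like error.

```python
import math
import random

def quantize(x, bits=8, dither=False):
    """Round x (in [-1, 1)) to a `bits`-bit grid, optionally with TPDF dither."""
    q = 2.0 ** (1 - bits)                    # one quantization step (LSB)
    if dither:
        # TPDF dither: sum of two uniform values, spanning +/- 1 LSB.
        x = x + (random.random() - random.random()) * q
    return q * math.floor(x / q + 0.5)

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

random.seed(0)
# A low-level 440 Hz sine at 48 kHz: without dither the error follows
# the signal (harmonic distortion); with dither it is decorrelated noise.
sine = [0.01 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
plain = [quantize(s) for s in sine]
dithered = [quantize(s, dither=True) for s in sine]
print("undithered error RMS:", rms([p - s for p, s in zip(plain, sine)]))
print("dithered error RMS:  ", rms([d - s for d, s in zip(dithered, sine)]))
```

Note that the dithered error RMS is somewhat higher; dither does not reduce the error power, it changes its character from distortion to noise.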


Tuesday, May 23, 09:00 — 11:00

T18 - MICROPHONE PRINCIPLES

Presenter:
Jörg Wuttke, SCHOEPS GmbH

Abstract:
Even a professional with many years of experience might enjoy reviewing the basics of acoustics and the operating principles of microphones. This tutorial also includes a discussion of technical specifications and numerous practical issues.

- Introduction: Vintage technology and the future; physics and emotion; choosing a microphone for a specific application
- Basic acoustics: Sound waves; frequency and wavelength; pressure and velocity; reflection and diffraction; comb filter effects; direct and diffuse sound
- Basic evaluations: Loudness and SPL; decibels; listening tests; frequency/amplitude and frequency/phase response; frequency domain and time domain
- How microphones work: Pressure and pressure-gradient transducers; directional patterns; some special types (boundary layer microphones and shotguns)
- Microphone specifications: Frequency responses (plural!); polar diagrams; free-field vs. diffuse-field response; low- and high-frequency limits; equivalent noise, maximum SPL and dynamic range
- Practical issues: Source and load impedance; powering; wind and breath noise.


Tuesday, May 23, 11:30 — 13:30

T19 - SELECTED TOPICS IN AUDIO FORENSICS

Presenters:
Lise-Lotte Tjellesen, Consultant - Denmark
Durand Begault, Charles M. Salter Associates - San Francisco, CA, USA
Eddy Bøgh Brixen, Consultant - Denmark
Catalin Grigoras, Ministry of Justice - Romania

Abstract:
This 2-hour tutorial will give an overview of the technical procedures and challenges for those working in the field of forensic audio. It is presented by members of the recently formed Technical Committee on Audio Forensics. Topics include an overview of the field; crime scene analysis; speech recording enhancement; authentication; voice analysis; and future areas and challenges.


Tuesday, May 23, 11:30 — 13:30

T20 - MICROPHONE TECHNIQUES IN THE MULTICHANNEL ERA

Presenters:
Wes Dooley, Audio Engineering Associates - Pasadena, CA USA
Ron Streicher, Pacific Audio-Visual Enterprises - Pasadena, CA USA

Abstract:
This session will discuss many of the factors to be considered when choosing and placing microphones in a multichannel environment — whether it be for sound reinforcement, broadcast, stereo recording, or surround-sound production.

Among the topics to be covered will be:
• What are the priorities?
• How to obtain the best advantage from polar patterns
• Applications and dictates of the inverse square law
• The law of the first wavefront
• The multichannel environment
• Basic pragmatic issues when using microphones
• Corollary issues


Tuesday, May 23, 14:00 — 16:00

T21 - DIGITAL SIGNALS, FILTERS, EQUALIZERS

Presenter:
Jamie Angus, University of Salford - UK

Abstract:
In this tutorial we will first present the principles of sampling and the basic form and function of digital filters, explain the link between impulse response and frequency response, and show how it leads to different forms of filtering. Next we will relate that to the similarities and differences between the designs of analog and digital filters. Finally, we will discuss some of the filter structures that might be used for audio equalizers.
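The impulse-response/frequency-response link mentioned in the abstract can be sketched in a few lines: for an FIR filter, the frequency response is the discrete-time Fourier transform of the impulse response. The two-point moving average below is a hypothetical example chosen for illustration, not an example from the tutorial.

```python
import cmath
import math

def freq_response(h, num_points=4):
    """Magnitude of H(e^jw) for an FIR filter with impulse response h,
    evaluated at equally spaced frequencies from DC to Nyquist."""
    mags = []
    for i in range(num_points):
        w = math.pi * i / (num_points - 1)   # 0 .. pi (Nyquist)
        H = sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))
        mags.append(abs(H))
    return mags

# A two-point moving average is the simplest low-pass FIR filter:
# unity gain at DC, a complete null at Nyquist.
print([round(m, 3) for m in freq_response([0.5, 0.5])])
# -> [1.0, 0.866, 0.5, 0.0]
```

Changing the impulse response h changes the frequency response directly, which is the basic design lever behind the equalizer structures the tutorial goes on to discuss.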

   
  (C) 2006, Audio Engineering Society, Inc.