AES London 2011
Tutorial Details


Friday, May 13, 11:00 — 12:30 (Room 2)

T1 - What's Your EQ IQ?

Presenter:
Alex Case, University of Massachusetts, Lowell - Lowell, MA, USA

Abstract:
Equalization—we reach for it track by track and mix by mix, as often as any other effect. Easy enough, at first. EQ becomes more intuitive when you have a deep understanding of the parameters, types, and technologies used, plus knowledge of the spectral properties and signatures of the most common pop and rock instruments. Alex Case shares his approach to applying EQ and strategies for its use: fixing frequency problems, fitting the spectral pieces together, enhancing flattering features, and more.

Friday, May 13, 16:00 — 18:00 (Room 5)

T2 - Forensic Audio Enhancement

Chair:
Eddy B. Brixen, EBB Consult - Denmark
Panelists:
Andrey Barinov, Speech Technology Center Ltd. - Russia
Robin P. How, Metropolitan Police - London, UK
Gordon Reid, CEDAR Audio Ltd. - Cambridge, UK

Abstract:
Attendees of this tutorial should expect to learn more about issues surrounding the enhancement of forensic audio recordings. Topics including evidence handling and processing, problem-specific approaches, and the growing prevalence of GSM-modulated voice signals will be presented and discussed. The panel of presenters includes practitioners from law enforcement, software manufacturers, and the scientific community. Before-and-after audio examples will be demonstrated, focusing on specific problems.

Saturday, May 14, 09:00 — 11:00 (Room 2)

T3 - Managing Tinnitus as a Working Audio Professional

Co-chairs:
Neil Cherian, Staff Neurologist, Neurological Institute, Cleveland Clinic - Cleveland, OH, USA
Michael Santucci, President, Sensaphonics Hearing Conservation - Chicago, IL, USA

Abstract:
Tinnitus is a common yet poorly understood disorder where sound is perceived in the absence of an external source. Significant sound exposure, with or without hearing loss, is the most common risk factor. Tinnitus can be debilitating and can impair quality of life. Anxiety, depression, and sleep disorders are potential consequences. Most importantly for those in the audio industry, it can significantly impair auditory perception.

This tutorial will focus on methods for managing tinnitus in the life of an audio professional. Background information will be provided on the basic concept of tinnitus, pertinent anatomy and physiology, audiologic parameters of tinnitus, and an overview of current research. Suggestions for identifying and mitigating high-risk behaviors will be covered. Elements of the medical and audiologic evaluation of tinnitus will also be reviewed.

Saturday, May 14, 11:00 — 13:00 (Room 5)

T4 - In-Ear Monitoring—Past, Present, and Emerging Developments

Presenter:
Stephen Ambrose, Asius Technologies LLC

Abstract:
In the 1970s, sound engineer and performer Stephen Ambrose began pioneering experiments with sound-isolating earphones as a better means of hearing himself and others during live performances. This form of monitoring was in sharp contrast to the norm at the time, which consisted of an array of dedicated on-stage loudspeakers. Unlike with loudspeakers, performers with isolating earphones could each hear their own mix, separate from the others. As an additional benefit of in-ear monitoring, stage monitors are no longer present to leak output into the house sound system or recording mixes. Ambrose's early work with a wide range of well-known performers established the basics of in-ear monitoring but was challenged by technical barriers and acceptance issues. Today, however, in-ear monitoring (IEM) is in widespread use in many product forms. This tutorial, punctuated with a variety of demonstrations, will cover the key developments responsible for the growing acceptance of IEM systems. New improvements in the areas of sound isolation, comfort, occlusion-effect suppression, and reduction of hearing fatigue, most of which are not yet commercially available, will also be discussed.

Sunday, May 15, 09:00 — 10:30 (Room 3)

T5 - Audio over IP—The Basics

Presenter:
Peter Stevens, BBC R&D

Abstract:
With the slow demise of ISDN connections and technology, audio over IP plays an increasingly important role in transporting audio signals from A to B. But this is only one area where audio over IP is used; many more existing and potential applications show where the technology is heading. This tutorial looks at the basics of the technology, pinpointing its advantages as well as the pitfalls and how they can be avoided. Practical examples will underline the concepts.

Sunday, May 15, 09:00 — 10:30 (Room 4)

T6 - No—I Said "Interactive" Music

Co-chairs:
Dave Raybould, Leeds Metropolitan University
Richard Stevens, Leeds Metropolitan University

Abstract:
This session will recap a number of common approaches to so-called "interactive" music in games through a series of practical in-game demonstrations. We'll discuss what is understood by the terms reactive, adaptive, and interactive, and put forward the argument that there is actually very little about the current use of music in games that is truly "interactive." The session will conclude with further examples of potential future solutions for how we might better align the time-based medium of music with the nonlinear medium of games. This session is intended to be accessible to complete beginners but also thought-provoking for old pros!

Sunday, May 15, 11:00 — 12:30 (Room 3)

T7 - Loudness and EBU R 128—A Basic Tutorial of the Why and What

Presenter:
Florian Camerer, ORF – Austrian TV, and Chairman of EBU PLOUD

Abstract:
The EBU recommendation R 128, "Loudness normalization and permitted maximum level of audio signals," is a milestone in establishing a new way to level audio signals. Together with four supporting documents, it provides the framework for the transition into this new world of leveling audio based on loudness rather than peaks. The chairman of the EBU group PLOUD will present R 128 in detail with the help of examples and will give an outlook on how and where the recommendation is already in practical use.
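
For readers unfamiliar with the basic idea, the sketch below shows loudness normalization reduced to its simplest form: measure the integrated programme loudness and apply a static gain toward the R 128 Target Level of -23 LUFS. This is an illustration only, not material from the tutorial; the measured value is a made-up example.

```python
# Minimal sketch of loudness normalization in the spirit of EBU R 128:
# measure integrated programme loudness, then apply a static gain toward
# the Target Level of -23 LUFS. The measured value below is hypothetical.
TARGET_LUFS = -23.0

def normalization_gain_db(measured_lufs: float) -> float:
    """Gain (in dB) needed to bring a programme to the R 128 target."""
    return TARGET_LUFS - measured_lufs

print(normalization_gain_db(-18.5))   # a loud programme: apply -4.5 dB
```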

Sunday, May 15, 14:00 — 15:30 (Room 4)

T8 - Video for Audio Engineers

Presenter:
David Tyas, Ikon

Abstract:
Incorporating video into a system can be a good additional source of revenue for audio professionals. It may simply be adding a projector to a theatre or digital signage to augment a VA system. But with multiple analogue video formats, and a transition to an equally confusing array of incompatible digital formats, what connects together and works, how do you convert between the formats, and how do you cable and connect it all?

This short seminar will cover the basics of video, with the emphasis on the newer formats, how to interface them, and how to incorporate legacy products into systems.

Sunday, May 15, 17:00 — 18:00 (Room 5)

T9 - Power Limitations in Micro Transducers

Presenters:
Bo Rohde Pedersen, Rohde Consulting
Andreas Eberhardt Sørensen, Pulse HVT

Abstract:
Voice coil temperature is typically a hot topic when discussing the power handling of transducers, and many manufacturers use IEC tests to specify and rank the input power a specific transducer can endure. The same is true for micro speakers, but because we are dealing with small coils and thin wires, the need to know and map the thermal factors is even more important.

These factors are the input power, the losses, and the cooling effects that occur around the voice coil. In this session they will be mapped, investigated, and demonstrated. The first part of the session is a mini tutorial on using an infrared thermal camera to measure voice coil temperature, along with a discussion of what must be considered when setting up such a system to measure the thermal behavior of micro transducers.

Our tests show voice coil temperatures of up to 200°C; as the temperature rises, the DC resistance (Rdc) also increases and the power actually dissipated in the coil decreases. The theoretical thermal model of the transducer will be examined and revised to fit the findings of our research. The model used is of second order and includes nonlinearities to describe the change in voice coil resistance and the velocity-dependent cooling of the voice coil.
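
The link between coil temperature and dissipated power can be sketched with the standard resistance-temperature relation for copper. The cold resistance and drive voltage below are assumptions for illustration, not the measurement data discussed in the session.

```python
# Illustrative sketch only: copper resistance rises roughly linearly with
# temperature, so a constant-voltage drive dissipates less power in a hot coil.
ALPHA_CU = 0.0039       # temperature coefficient of copper, 1/K (approx.)

def rdc_at(temp_c: float, rdc_cold: float, ref_c: float = 25.0) -> float:
    """DC resistance at temp_c, given the cold resistance measured at ref_c."""
    return rdc_cold * (1.0 + ALPHA_CU * (temp_c - ref_c))

rdc_cold = 8.0                            # ohms at 25 degC (assumed)
rdc_hot = rdc_at(200.0, rdc_cold)         # ~13.5 ohms at 200 degC
u_rms = 2.83                              # constant drive voltage (assumed)
print(u_rms**2 / rdc_cold, u_rms**2 / rdc_hot)   # ~1.0 W cold vs ~0.6 W hot
```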

Two micro loudspeakers are under test, one with a round and one with a rectangular voice coil, because the demand for higher output and lower resonance frequencies has moved development from small round coils to larger (relative to the transducer) rectangular coils. The effect of this transition will be documented, and the differences, as well as the change in the thermal factors, will be explained. With this session we want to prove that this trend has a positive effect on the overall thermal behavior of the voice coil and on the micro transducer's capability to endure higher input power.

Finally, the tutorial will end with a discussion of the current situation in the micro transducer market: customers are asking for more power from the driving amplifier, and the first question is whether the micro transducer will be able to handle that input power. Today's amplifiers deliver 1-1.5 W into 8 ohms with a built-in voltage step-up, but customers are asking for 2-3 W, and micro transducers will have to be designed to handle input abuse on this scale. Once this power handling is in place, the next question will be what is gained on the output side of the micro transducer.
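
To put rough numbers on the amplifier side (a back-of-the-envelope illustration, not a figure from the session): delivering a continuous sine power P into a load R requires an RMS voltage of sqrt(P*R), which is why a step-up stage is needed to reach these levels from a typical battery rail. The 3.7 V rail mentioned in the comment is an assumption.

```python
# Back-of-the-envelope check of the drive voltage needed for a given sine
# power into 8 ohms (illustrative only).
import math

def vrms_for(power_w: float, load_ohm: float = 8.0) -> float:
    return math.sqrt(power_w * load_ohm)

for p in (1.0, 1.5, 2.0, 3.0):
    v = vrms_for(p)
    print(f"{p:.1f} W into 8 ohm -> {v:.2f} V RMS ({v * math.sqrt(2):.2f} V peak)")
# 3 W into 8 ohm needs ~4.9 V RMS (~6.9 V peak), well above a 3.7 V battery rail.
```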

Monday, May 16, 09:00 — 10:30 (Room 4)

T10 - Analog Standards—Why Are There So Many?

Presenter:
Sean Davies

Abstract:
"We like standards, that’s why we have so many.” ( variously attributed ).

Although most current audio work is in the digital domain, there remains a vast legacy of recordings made in analog, access to which will be required for some considerable time to come. Knowledge is therefore required of the standards applicable to each recording, as well as of the relevant technical literature.

This tutorial will seek to explain how and why the seemingly perverse plurality of standards arose. Fields covered will be: (1) impedances: why 30 ohms ("R"), 200R, 300R, 600R; (2) levels: the TU, the neper, the decibel; (3) metering: the VU versus the PPM; (4) disc recordings: speeds and equalization curves; (5) tape recordings: speeds and equalization curves; (6) a bag of miscellaneous horrors such as color codes, connectors, and so on.
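
For orientation on the level units listed above, the decibel and the neper express the same ratio on different logarithmic bases. The quick conversion below is my own illustration, not part of the tutorial.

```python
# For an amplitude (voltage) ratio r, level = 20*log10(r) dB = ln(r) Np,
# so 1 Np is approximately 8.686 dB.
import math

def np_to_db(level_np: float) -> float:
    return level_np * 20.0 / math.log(10)

def db_to_np(level_db: float) -> float:
    return level_db * math.log(10) / 20.0

print(np_to_db(1.0))   # ~8.686 dB per neper
print(db_to_np(6.0))   # ~0.69 Np for roughly a doubling of voltage
```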

Monday, May 16, 09:00 — 10:15 (Room 5)

T11 - Roadie—How to Gain and Keep a Career in the Live Music Industry

Presenter:
Andy Reynolds, Owner, LiveMusicBusiness.com

Abstract:
Live music is a huge industry—65 million people around the world attended a concert in 2010, with gross ticket sales of over $3 billion. Every single one of those concerts needs a sound reinforcement system and audio engineers to rig the system and help enhance the sound coming from the stage. But how do you gain, and more importantly keep, an audio job in the live music business? How do you differentiate between the different audio roles on a show or event? What specialized skills are relevant to concert audio engineering? This tutorial will examine the role of the audio engineer in modern concert production, look at routes into the industry, and offer advice on finding and dealing with clients, as well as on the technical skills and knowledge that every live audio engineer should be aware of.

Monday, May 16, 10:30 — 12:00 (Room 5)

T12 - You, a Room, and a Pair of Headphones: A Lesson in Binaural Audio

Presenter:
Ben Supper, Focusrite

Abstract:
Stereo recordings that are intended for loudspeaker reproduction do not work properly over headphones. Methods for converting stereo for headphone monitoring have become increasingly elaborate as the availability of digital signal processing has increased. The challenge is to arrive at a stereophonic effect that is as the composer, engineer, and producer intended: stable, with correct perspective, out-of-head localization, and no excessive polarization of the stereo image.

To do these things successfully is not as trivial as it may first appear. No two listeners are identical, and the choice of headphones is a matter of taste. Moreover, the illusion needs reverberation, but this must not get in the way of the music.

This tutorial picks a path through the peculiarities of the human hearing system, discussing aspects of our perception of loudspeakers and rooms, and how close we can come to understanding and modeling them completely. Insights will be given into how the human auditory system can be made to believe a virtually rendered sound field, how to do this cheaply, and why we don't need to cram the entire gamut of reality onto a tiny piece of silicon.
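
One of the cues any such rendering has to reproduce is the interaural time difference. The classic Woodworth spherical-head approximation, shown below purely as an illustration (the head radius is an assumed average, and this is not material from the tutorial), gives a feel for the magnitudes involved.

```python
# Illustrative only: Woodworth spherical-head approximation of the interaural
# time difference (ITD) for a distant source in the horizontal plane.
import math

HEAD_RADIUS_M = 0.0875    # assumed average head radius, metres
SPEED_OF_SOUND = 343.0    # m/s

def itd_seconds(azimuth_deg: float) -> float:
    theta = math.radians(azimuth_deg)     # 0 degrees = straight ahead
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e6:6.1f} microseconds")
# ~650 microseconds at 90 degrees: the tiny time shifts a binaural renderer must get right.
```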

Monday, May 16, 11:00 — 13:30 (Room 4)

T13 - Hot and Nonlinear—Loudspeakers at High Amplitudes

Presenter:
Wolfgang Klippel, Klippel GmbH - Dresden, Germany

Abstract:
Nonlinearities inherent in the electro-dynamical transducer, together with the heating of the voice coil and magnet system, limit the acoustical output and generate distortion and other symptoms at high amplitudes. The large-signal performance is the result of a deterministic process and is predictable by lumped-parameter models comprising nonlinear and thermal elements. The tutorial gives an introduction to the fundamentals, shows alternative measurement techniques, and discusses the relationship between the physical causes and the symptoms, depending on the properties of the particular stimulus (test signal, music). Selecting meaningful measurements, interpreting the results, and practical loudspeaker diagnostics are the main objectives of the tutorial, all of which are important for designing small, light transducers that produce the desired output at high efficiency and reasonable cost.
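
As a toy illustration of what a lumped-parameter model with a nonlinear element looks like (this is a sketch under assumed parameter values, not Klippel's measurement method), the snippet below integrates a single-degree-of-freedom moving-coil model whose force factor Bl(x) falls off with displacement.

```python
# Toy lumped-parameter moving-coil model with one nonlinearity, Bl(x).
# All parameter values are assumptions for illustration, not measured data.
import math

Re   = 6.0      # voice coil DC resistance, ohm
Bl0  = 5.0      # force factor at rest position, T*m
BETA = 2.0e4    # curvature of Bl(x), 1/m^2 (symmetric roll-off, assumed)
Mms  = 0.010    # moving mass, kg
Rms  = 1.0      # mechanical damping, N*s/m
Kms  = 1.0e3    # suspension stiffness, N/m

def bl(x: float) -> float:
    """Force factor falls off as the coil leaves the magnetic gap."""
    return Bl0 * max(0.0, 1.0 - BETA * x * x)

fs = 192_000                       # simulation rate, Hz
dt = 1.0 / fs
f_drive, u_peak = 50.0, 5.0        # drive tone near resonance (assumed)

x = v = 0.0
x_max = 0.0
for n in range(int(0.2 * fs)):     # simulate 200 ms
    u = u_peak * math.sin(2.0 * math.pi * f_drive * n * dt)
    i = (u - bl(x) * v) / Re                     # electrical side (Le neglected)
    a = (bl(x) * i - Rms * v - Kms * x) / Mms    # mechanical side
    v += a * dt
    x += v * dt
    x_max = max(x_max, abs(x))

print(f"peak excursion: {x_max * 1000:.2f} mm")
```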

Monday, May 16, 14:00 — 15:30 (Room 3)

T14 - Surround Sound Formats

Presenter:
Hugh Robjohns, Technical Editor and Trainer, Project Manager

Abstract:
Now that surround sound has become mainstream in the broadcast business, this tutorial looks back at the basics and technical details of surround sound formats, covering matrix techniques like Dolby Surround, compression systems like DTS or Dolby AC-3, as well as broadcast-specific coding schemes like Dolby E. Metadata, line-up tones, and mixing issues with respect to the coding systems will be touched upon, too.

Monday, May 16, 16:00 — 17:30 (Room 2)

T15 - Delay FX—Let's Not Waste Any More Time

Presenter:
Alex Case, University of Massachusetts, Lowell - Lowell, MA, USA

Abstract:
The humble delay is an essential effect for creating the loudspeaker illusion of music that is better than real, bigger than life. A broad range of effects—comb filtering, flanging, chorus, echo, reverb, pitch shift, and more—are born from delay. One device, one plug-in, and you’ve got a vast palette of production possibilities. Join us for this thorough discussion of the technical fundamentals and production strategies for one of the most powerful signal processes in the audio toolbox: delay.
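
As a rough illustration of the point (my own sketch, not material from the tutorial), a single feedback delay line already yields several of these effects depending only on the delay time: around a millisecond it behaves as a comb filter, at a few hundred milliseconds it produces discrete echoes.

```python
# One delay line, several effects: a short delay gives comb filtering,
# a long delay gives discrete echoes. Parameter values are illustrative.
import numpy as np

def feedback_delay(x: np.ndarray, fs: int, delay_ms: float,
                   feedback: float = 0.5, mix: float = 0.5) -> np.ndarray:
    d = max(1, int(round(fs * delay_ms / 1000.0)))    # delay length in samples
    buf = np.zeros(d)                                  # circular delay buffer
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = buf[n % d]                           # sample written d samples ago
        y[n] = (1.0 - mix) * x[n] + mix * delayed
        buf[n % d] = x[n] + feedback * delayed         # write input plus feedback
    return y

fs = 48_000
noise = np.random.default_rng(0).standard_normal(fs)   # 1 s of test noise
comb = feedback_delay(noise, fs, delay_ms=1.0)          # ~1 ms: comb filtering
echo = feedback_delay(noise, fs, delay_ms=350.0)        # ~350 ms: audible echoes
```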

