AES London 2010 Saturday, May 22, 10:30 — 12:30
W1 - Audio Network Control Protocols
Richard Foss, Rhodes University - Grahamstown, South Africa
John Grant, Nine Tiles
Robby Gurdan, Universal Media Access Networks (UMAN)
Stefan Ledergerber, Harman Pro Audio
Philip Nye, Engineering Arts
Andy W. Schmeder, University of California, Berkeley - Berkeley, CA, USA
Digital audio networks have solved a number of problems related to the distribution of audio in contexts including recording studios, stadiums, convention centers, theaters, and live concerts. Compared to analog solutions, they offer easier cabling, better immunity to interference, and enhanced control over audio routing and signal processing. A number of audio network types exist, along with a number of audio network protocols that define the messaging necessary for connection management and control. The problem with this range of protocol solutions is that a large number of professional audio devices are being manufactured without regard to global interoperability. In this workshop a panel of audio network protocol experts will describe the features of the audio network protocols they are familiar with and discuss which features should appear in a common protocol.
Saturday, May 22, 14:00 — 16:00 (Room C2)
W2 - AES42 and Digital Microphones
Helmut Wittek, SCHOEPS Mikrofone GmbH - Karlsruhe, Germany
Claudio Becker-Foss, DirectOut - Germany
Gregor Zielinski, Sennheiser - Germany
The AES42 interface for digital microphones is not yet widely used. This may be due to the relatively recent appearance of digital microphone technology, but also to a lack of knowledge of and practice with digital microphones and the corresponding interface. The advantages and disadvantages have to be communicated in an open and neutral way, regardless of commercial interests, on the basis of the actual needs of engineers.
Along with an available white paper on AES42 and digital microphones, which aims at neutral, in-depth information and was compiled by several authors, the workshop intends to bring to light facts and prejudices on this topic.
Saturday, May 22, 16:00 — 18:00 (Room C6)
W3 - Calibration of Analog Reproducing Equipment for Digitization Projects
George Brock-Nannestad, Patent Tactics - Denmark
Sean W. Davies, S.W. Davies Ltd. - UK
Andrew Pearson, British Library Sound Archive - UK
Peter Posthumus, Post Sound Ltd.
Analog reproduction equipment sees its last use in the transfer of audio content to digital files. To obtain the optimal signal from the carrier (tape or disc), the equipment has to be adjusted; to obtain traceability, it has to be calibrated. The generations of technicians who did this on a regular basis are vanishing quickly, and making calibration materials available, such as the AES-S001-064 coarse-groove calibration disc, is not enough when the know-how for their use is lacking. This workshop aims to redress this situation by providing a firm theoretical underpinning for practical demonstrations, in which the basic principles of calibration and alignment will be discussed along with logging the activity. Topics include the calibration and alignment of analog tape reproducers, including equipment for tapes encoded with systems such as Dolby A and Dolby SR, as well as the calibration and alignment of gramophone record reproducing equipment. Active discussion is encouraged and may spill over into optical sound as found on film.
Sunday, May 23, 09:00 — 11:00 (Room C1)
W4 - Blu-ray as a High Resolution Audio Format for Stereo and Surround
Stefan Bock, msm-studios GmbH - Munich, Germany
Simon Heyworth, SAM - UK
Morten Lindberg, 2L - Norway
Johannes Müller, msm-studios - Germany
Crispin Murray, Metropolis - UK
Ronald Prent, Galaxy Studios - Belgium
The decision for the Blu-ray disc as the only HD packaged media format, one that also offers up to eight channels of uncompressed high-resolution audio, has eliminated at least one of the obstacles to getting high-resolution surround sound music to the market. The concept of utilizing Blu-ray as a pure audio format will be explained, and Blu-ray will be positioned as the successor to both SACD and DVD-A. The operational functionality, and a dual concept that makes the disc usable both with and without a screen, will be demonstrated using a few products that are already on the market.
Sunday, May 23, 11:00 — 12:45 (Room C6)
W5 - Applications of Time-Frequency Processing in Spatial Audio
Ville Pulkki, Aalto University School of Science and Technology - Aalto, Finland
Christof Faller, Illusonic LLC - Lausanne, Switzerland
Jean-Marc Jot, DTS Inc. - CA, USA
Christian Uhle, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
The time-frequency resolution of human hearing has long been taken into account in perceptual audio codecs. Recently, the spatial resolution of human hearing has also been exploited in time-frequency processing, which has already led to some commercial applications. This workshop covers the capabilities and limitations of human spatial hearing and the audio techniques that exploit these features. Typically the techniques are based on estimating directional information for each auditory frequency channel; this information is then used in further processing. The application areas discussed in the workshop include, at least, audio coding, microphone techniques, upmixing, directional microphones, and studio effects.
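As a much-simplified illustration of the directional analysis such techniques rely on (a DirAC-style intensity-vector estimate), the sketch below derives a single broadband arrival direction from first-order (B-format) signals. Real systems perform this per time-frequency tile using a filter bank or STFT; the plane-wave encoding convention used here (omnidirectional W without the sqrt(2) scaling) is an assumption for the example.

```python
import numpy as np

def estimate_direction_deg(w, x, y):
    """Broadband arrival-direction estimate from first-order signals.

    w, x, y: omni and figure-of-eight components. The time-averaged
    active intensity vector (mean(w*x), mean(w*y)) points toward
    the dominant source in the horizontal plane.
    """
    ix = np.mean(w * x)
    iy = np.mean(w * y)
    return np.degrees(np.arctan2(iy, ix))

# A plane wave from 30 degrees azimuth, encoded with the same convention:
s = np.sin(2 * np.pi * np.arange(1024) / 64.0)
az = np.radians(30.0)
print(round(estimate_direction_deg(s, s * np.cos(az), s * np.sin(az)), 1))  # → 30.0
```

A real analyzer would run this per frequency band and also estimate a diffuseness measure, which is what allows the further processing (upmixing, coding) mentioned above to treat direct and ambient sound differently.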
Sunday, May 23, 14:00 — 15:45 (Room C2)
W6 - How Do We Evaluate High Resolution Formats for Digital Audio?
Hans van Maanen, Temporal Coherence - The Netherlands
Milind Kunchur, University of South Carolina - SC, USA
Thomas Sporer, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany
Menno van der Veen, Ir. Bureau Vanderveen
Wieslaw Woszczyk, McGill University - Montreal, Quebec, Canada
Since the introduction of high-resolution formats for digital audio (e.g., SACD, 192 kHz / 24 bit), there has been discussion about the audibility of these formats compared to the CD format (44.1 kHz / 16 bit). What difference do high sample rates and bit depths make in our perception? Can we hear tones above 20 kHz? Can we perceive quantization errors in 16-bit audio? Does a high sample rate make a difference in our phase resolution? Are we even asking the right questions? Controlled, scientific listening tests have mostly given ambiguous or inconclusive results, yet a large number of consumers using "high-end" audio equipment prefer the sound of the "high resolution" formats over the CD. The workshop will start with introductory notes from the panel members, who will discuss the differences between "analog" and first-generation digital formats, address some of the paradoxes of the CD format, present results on "circumstantial" evidence and subjective testing, show results on the limits of human hearing that cannot be explained by the commonly accepted 20 kHz upper limit, and discuss the problems and pitfalls of "scientific" listening tests, illustrated with demonstrations where possible.
These introductory notes should provoke a discussion with the audience about the audibility of the improvements of the "high resolution" formats. We will attempt to reach consensus, where possible, regarding what is and is not known about our ability to perceive the differences between standard and high-resolution audio. We will further discuss the paradigms of testing for evaluating the quality and perception of high-resolution audio: how to structure the tests, how to configure the testing environment, and how to analyze the results.
The outcome of the workshop should also be to find a way forward by identifying the bottlenecks that currently hamper the further implementation of the "high resolution" formats for "high-end" audio, as these formats create an opportunity for the audio industry as a whole: better sources stimulate the development of better reproduction systems.
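The quantization question raised above can be made concrete with the standard rule of thumb for an ideal uniform N-bit quantizer driven by a full-scale sine, SNR ≈ 6.02·N + 1.76 dB. The numbers below are this textbook idealization, not a claim about any particular converter:

```python
def quantization_snr_db(bits: int) -> float:
    # Theoretical SNR of an ideal uniform quantizer with a
    # full-scale sine input: about 6.02 dB per bit plus 1.76 dB.
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits}-bit: {quantization_snr_db(bits):.1f} dB")
# 16-bit: 98.1 dB, 20-bit: 122.2 dB, 24-bit: 146.2 dB
```

Whether the extra headroom of 24-bit over the CD's roughly 98 dB is audible under realistic playback conditions is precisely the kind of question the workshop's listening-test discussion addresses.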
Sunday, May 23, 16:00 — 19:00 (Room C6)
W7 - The Work of Forensic Speech and Audio Analysts
Peter French, JP French Associates, University of York
Philip Harrison, JP French Associates, University of York
This workshop provides an illustrated introduction to the various categories of work carried out by forensic speech and audio analysts. In attempting to convey the variety, scope, and flavor of the work, methodological issues, principles, and developments are discussed in respect of key areas of forensic investigation. These include speaker profiling, the analysis of recordings of criminal speech samples in order to assemble a profile of regional, social, ethnic, and other information concerning speakers; speaker comparison testing, the comparison of recorded voice samples using auditory-phonetic and acoustic methods as well as automatic systems to assist with identification; content analysis, the use of phonetic and acoustic techniques to decipher contentious areas of evidential recordings; procedures for evaluating ear-witness testimony; sound propagation testing, the reconstruction of crime scenes and the measurement of sound from given points across distances to assess what might have been heard by witnesses in various positions; and recording authentication, the examination of evidential recordings for signs of tampering or falsification. Some of the points developed are exemplified by recordings and analyses arising from important and high-profile criminal cases. The presentation will be followed by a discussion session where questions and points are invited from those attending.
Monday, May 24, 09:00 — 11:00 (Room C2)
W8 - Interacting with Semantic Audio—Bridging the Gap between Humans and Algorithms
Michael Hlatky, University of Applied Sciences, Bremen - Bremen, Germany
Masataka Goto, Media Interaction Group, National Institute of Advanced Industrial Science and Technology - Tsukuba, Japan
Anssi Klapuri, Queen Mary University of London - London, UK
Jörn Loviscach, Fachhochschule Bielefeld, University of Applied Sciences - Bielefeld, Germany
Yves Raimond, BBC Audio & Music Interactive - London, UK
Technologies under the heading of Semantic Audio have undergone a fascinating development in the past few years. Hundreds of algorithms have been developed, and the first applications have made their way from research toward mainstream use. However, the current level of awareness among prospective users and the amount of actual practical use do not seem to live up to the potential of semantic audio technologies. We argue that this is more an issue of interface and interaction than a problem of the robustness of the applied algorithms or of a lack of need in audio production. The panelists of this workshop offer ways to improve the usability of semantic audio techniques. They look into current applications in off-the-shelf products, discuss their use in a variety of specialized applications such as custom-tailored archival solutions, demonstrate and showcase their own developments in interfaces for semantic audio, and propose future directions in interface and interaction development for semantic audio technologies ranging from audio file retrieval to intelligent audio effects.
The second half of this workshop includes hands-on interactive experiences provided by the panel.
Monday, May 24, 09:00 — 10:30 (Room C6)
W9 - Redundant Networks for Audio
Umberto Zanghieri, ZP Engineering srl - Rome, Italy
Marc Brunke, Optocore - Germany
David Myers, Audinate - Australia
Michel Quaix, Digigram - France
Al Walker, KlarkTeknik/Midas - UK
Redundancy in a digital network for audio transport has specific requirements compared to redundancy in ordinary data networks, and for this reason several practical implementations offer dual network ports. Ideas and solutions from current formats will be presented, detailing requirements specific to use cases such as digital audio transport for live events or fixed installations. The discussion will also include non-IP audio networks. Different topologies and switchover issues will be presented, and practical real-world examples will be shown.
Monday, May 24, 11:00 — 13:00 (Room C6)
W10 - A Curriculum for Game Audio
Richard Stevens, Leeds Metropolitan University
Dan Bardino, Creative Services Manager, Sony Computer Entertainment Europe Limited
Andy Farnell, Author of Designing Sound
David Mollerstedt, DICE - Sweden
Dave Raybould, Leeds Metropolitan University - UK
Nia Wearn, Staffordshire University - Staffordshire, UK
How do I get work in the games industry? Anyone involved in the discussions that follow this question in forums, conferences, and workshops worldwide will realize that many students in Higher Education who are aiming to enter the sector are not equipped with the knowledge and skills that the industry requires. In this workshop a range of speakers will discuss, and attempt to define, the various roles and related skillsets for audio within the games industry and will outline their personal route into this field. The panel will also examine the related work of the IASIG Game Audio Education Working Group in light of the recent publication of its Game Audio Curriculum Guidelines draft. This will be a fully interactive workshop inviting debate from the floor alongside discussion from panel members in order to share a range of views on this important topic.
Monday, May 24, 14:00 — 15:45 (Room C1)
W11 - Surround for Sports
Martin Black, Senior Sound Supervisor & Technical Consultant, BSkyB - UK
Peter Davey, Audio Quality Supervisor at Beijing 2008 Olympics and Vancouver 2010 Olympics
Ian Rosam, 5.1 Audio Quality Supervisor for FIFA World Cup, Euro 2008, Beijing 2008 and Vancouver 2010 Olympics
Surround for sports is an increasingly important area of multichannel audio. It is a de facto standard for large-scale productions such as the Olympics or the football World Cup and European Championships.
The presenters, all experienced audio supervisors for such events, will touch on a variety of subjects regarding surround sound design for sports, along with many practical issues: crowd/audience; field-of-play FX; game sounds, e.g., ball kicks; competitors, e.g., curling; referees/umpires, e.g., rugby/tennis; board sounds, e.g., darts/basketball; scoring/timing, e.g., fencing buzzers, boxing time bell; commentators/reporters out of vision; and commentators/presenters/reporters in vision. Where these elements should sit in a 5.1 mix will be discussed, as well as use of the center channel, use of the LFE, HDTV stereo fold-down in a set-top box, Dolby E, metadata, bass management, and, last but not least, localization of sounds in 5.1 and human hearing.
Monday, May 24, 16:00 — 18:00 (Room C6)
W12 - Audio History, Archiving, and Restoration
Sean W. Davies, S.W. Davies Ltd. - UK
Ted Kendall, Sound Restorer
John Liffen, Curator of Communications at The Science Museum - London, UK
Will Prentice, The National Sound Archive, British Library
Nadia Walaskowitz, Phonogram Archive - Vienna, Austria
This workshop covers: (a) the preservation of historic audio equipment, both as exhibits and as working apparatus required for playback of historic formats; (b) the arrangement and management of archives/collections; (c) the preservation of the audio content for distribution.
Monday, May 24, 16:00 — 18:00 (Room C2)
W13 - Loudness in Broadcasting—The New EBU Recommendation R128
Andrew Mason, BBC R&D
Jean-Paul Moerman, Salzbrenner Stagetec Media Group - Buttenheim, Germany
Richard van Everdingen, Dutch Broadcasting Loudness Committee
The EBU group P/LOUD is approaching the final stage of its work, which will result in recommendations that will have a profound effect on any audio production in broadcasting. The gradual switch from peak to loudness normalization, combined with a new maximum true-peak level and the use of the descriptor "loudness range," makes it possible for the first time to fully characterize the audio part of a program. More importantly, it has the potential to solve the most frequent complaint of listeners: severe level inconsistencies. This is the first time that the new EBU loudness recommendation R128 is presented in detail, alongside a detailed introduction to the subject as well as practical case studies.
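The core of the loudness measurement behind R128 (defined in ITU-R BS.1770) is a mean-square energy measure expressed in LUFS. The sketch below is a deliberately stripped-down illustration: it omits the K-weighting pre-filter, the gating, and the multichannel weighting that the real measurement requires, keeping only the mean-square core and the spec's -0.691 dB offset.

```python
import numpy as np

def program_loudness_lufs(samples):
    """Very simplified loudness estimate in LUFS for a mono signal.

    Omits K-weighting, gating, and channel weighting from the real
    BS.1770/R128 measurement; only the mean-square core and the
    -0.691 dB offset are retained, so values differ from a
    compliant meter.
    """
    ms = np.mean(np.asarray(samples, dtype=float) ** 2)
    return -0.691 + 10.0 * np.log10(ms)

# A full-scale 1 kHz sine (mean square 0.5) under this simplification:
fs = 48000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 1000.0 * t)
print(round(program_loudness_lufs(sine), 2))  # → -3.7
```

Normalizing programs to a common target value of this measure, rather than to peak level, is what addresses the level-inconsistency complaint described above.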
Tuesday, May 25, 09:00 — 10:45 (Room C6)
W14 - 10 Things to Get Right in PA and Sound Reinforcement
Peter Mapp, Peter Mapp and Associates - Colchester, UK
Mark Bailey, Harman Pro Audio
Jason Baird, Martin Audio
Chris Foreman, Community Loudspeakers
Ralph Heinz, Renkus Heinz
Glenn Leembrugen, Consultant
The workshop will discuss the 10 most important things to get right when designing and operating sound reinforcement and PA systems. There are, however, many more things to consider than just the ten golden rules, and the order of importance of these often changes depending upon the venue and type of system. The workshop aims to provide a practical approach to sound system design and operation and will be illustrated with many practical examples and case histories. Each workshop panelist has many years' practical experience, and between them they can cover just about any aspect of sound reinforcement and PA system design, operation, and technology. Come along to a workshop that aims to answer questions you never knew you had; of course, to find out the ten most important ones, you will need to attend the session.
Tuesday, May 25, 09:00 — 10:45 (Room C1)
W15 - Single-Unit Surround Microphones
Eddy B. Brixen, EBB-Consult - Smørum, Denmark
Mikkel Nymand, DPA Microphones
Mattias Strömberg, Milab - Helsingborg, Sweden
Helmut Wittek, SCHOEPS Mikrofone GmbH
The workshop will present available single-unit surround sound microphones in a kind of shoot-out. A number of these microphones are available, and more units are on their way. These microphones are based on different principles; however, due to their compact size, there may be restrictions on their performance. This workshop will present the different products and the ideas and theories behind them.
Tuesday, May 25, 11:00 — 13:00 (Room C6)
W16 - MPEG Unified Speech and Audio Coding
Ralf Geiger, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Philippe Gournay, University of Sherbrooke / VoiceAge
Max Neuendorf, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Lars Villemoes, Dolby
Recently the ISO/MPEG standardization group launched an activity on unified speech and audio coding (USAC). This codec aims at achieving consistently high quality for speech, music, and mixed content over a broad range of bit rates, outperforming current state-of-the-art coders. While low-bit-rate speech codecs focus on efficient representation of speech signals, they fail for music signals. Generic audio codecs, on the other hand, are designed for any kind of audio signal but tend to show unsatisfactory results for speech, especially at low bit rates. The new USAC codec unifies the best of both systems. This workshop provides an overview of the architecture, performance, and applications of this new unified coding scheme. Experts in the fields of speech coding and audio coding will present details of the technical solutions employed to achieve top-notch unified speech and audio coding.
Tuesday, May 25, 14:00 — 15:30 (Room C1)
W17 - 5.1 into 2 Won't Go—The Perils of Fold-Down in Game Audio
Michael Kelly, Sony Computer Entertainment Europe
Richard Furse, Blue Ripple Sound Limited - UK
Simon Goodwin, Codemasters - UK
Jean-Marc Jot, DTS Inc. - CA, USA
Dave Malham, University of York - York, UK
One mixing solution cannot suit mono, stereo, headphone, and the various surround configurations. However, games mix and position dozens of sounds on the fly, so they can readily make a custom mix rather than rely on downmixing or upmixing that penalizes listeners who do not use the default configuration. This workshop explains practical solutions to problems of stereo-speaker, headphone, and mono compatibility (including Dolby Pro Logic and 2.1 setups) without detriment to surround. It notes the differences between the demands of games and cinema for surround, the challenges of reconciling the de facto (quad + 2) and theoretical (ITU 5.1) standard loudspeaker layouts, and the issues of playing 5.1-channel content on a 7.1 loudspeaker system.
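As background to the fold-down problem the title refers to, a static stereo downmix typically follows ITU-R BS.775-style coefficients. The sketch below is a minimal illustration of that static approach, the kind a custom in-game mix avoids; the LFE handling and the absence of any normalization against clipping are assumptions for the example, and real receivers and set-top boxes vary.

```python
import math

def fold_down_stereo(l, r, c, lfe, ls, rs):
    """Static ITU-style 5.1-to-stereo fold-down for one sample frame.

    Center and surround channels are mixed in at -3 dB (1/sqrt(2));
    the LFE is simply dropped here, a common but not universal choice.
    Note the sums can exceed full scale, one of the perils of a fixed
    fold-down applied after the fact.
    """
    a = 1.0 / math.sqrt(2.0)
    left = l + a * c + a * ls
    right = r + a * c + a * rs
    return left, right

print(fold_down_stereo(1.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # → (1.0, 0.0)
```

A game engine that knows the listener's configuration can instead pan each source directly for that layout, which is the alternative the workshop advocates.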