Saturday, May 20, 09:00 — 10:00 (Salon 7 Vienna)
Austin Moore (Presenter)
When mixing music in the box, it has become ever more important to apply subtle, and sometimes not so subtle, distortion and non-linear processing for coloration, changes to instrumental timbre, and creative effects. A wide range of software emulations of classic hardware devices is available for non-linear processing, and each has its own unique sonic signature. This tutorial investigates the fundamental principles behind distortion and coloration and explores how non-linear processing can be used by the mix engineer. The use of compressors for coloration, the effect of tape, and the use of saturation plugins that emulate preamp clipping behavior will be demonstrated in the workshop, and a full mix will be processed to provide audio examples that supplement the theory.
This session is presented in association with the AES Technical Committee on Recording Technology and Practices
Saturday, May 20, 09:00 — 10:00 (Salon 4+5 London)
Kevin Gross (Presenter), Andreas Hildebrand (Presenter)
Since the publication of the AES67 standard on high-performance streaming audio-over-IP, further work has started in various standardization organizations. Notably, SMPTE is developing SMPTE ST 2110, a suite of standards based on AES67 and the work of the Joint Task Force on Networked Media (JT-NM). This session will provide an overview of those standards, highlight their motivation, and briefly explain the protocols and network mechanics chosen and how they relate to AES67.
This session is presented in association with the AES Technical Committee on Network Audio Systems
Saturday, May 20, 10:00 — 11:00 (Salon 4+5 London)
Umberto Zanghieri (Presenter)
The history of audio signal processing, from the 1970s to the present, is marked by several interesting achievements. Signal processing for audio synthesis and computer music research was originally based on dedicated hardware designed specifically for those applications, but it has evolved into today's solutions for general-purpose audio signal processing. The digital audio processing landscape has changed considerably in the last two decades: new alternatives to traditional DSP devices are now available, and licensed cores now represent the majority of digital signal processors in use. An overview of digital signal processors for audio over the last 40 years will be presented, along with considerations for selecting a suitable DSP device for new projects and for anticipating trends in future work opportunities.
Saturday, May 20, 10:15 — 12:15 (Salon 7 Vienna)
John Storyk (Moderator), Renato Cipriano (Presenter), Dirk Noy (Presenter)
In an industry deluged by acronyms, Immersive Audio (IA) appears to have leapfrogged the trend. As with Surround Sound, Quad Sound, and their various 3.1, 5.1, 7.1, etc., iterations, much of the noise made by these new innovations is focused on hype rather than on specific real-world listener/viewer needs or actual science. That said, IA systems for producing, distributing, and receiving this new sound experience do exist, they work, and they are proliferating. Fundamental design considerations for audio studios still apply, but now with an eye on expanding mix configurations for the mixing engineer, the recording artist, and the capturing microphone(s). This class will explore Immersive Audio studio design, including acoustics, aesthetics, ergonomics, master planning, and future proofing.
Saturday, May 20, 11:00 — 11:45 (Salon 4+5 London)
Jon D. Paul (Presenter)
Engineers assume that all digital transmission must be perfect regardless of source, cable, destination, or sample rate. However, in the professional, cinema, and broadcast markets there are cases of signal dropouts, link failures, and even transient damage to transceivers. The causes and solutions lie in the design and choice of components and cables, as well as in the digital audio physical layer standards (AES3-4, AES-3id).
The tutorial summarizes 28 years of experience, with observations and test results for many types of cables, revealing huge variations in cable transmission, received spectra, and eye patterns. The effects of cables, interface circuits, and components such as transformers on the quality and reliability of transmission are reviewed. The tutorial includes discussion of balanced-to-unbalanced conversion, shielding, transient protection, and EMI susceptibility and compliance.
Finally, we suggest modifications to the standards, for instance to frame sync, rise times, bandwidths, eye-pattern masks, and reference designs.
Saturday, May 20, 12:00 — 12:45 (Salon 4+5 London)
Jon D. Paul (Presenter)
This tutorial explores the history of digital audio technology and the contributions of the great inventors that led to modern digital media. Signal analysis began in 1822 with Fourier's transform. Audio compression started in 1938 with the VOCODER, Homer Dudley's landmark speech encoder. Dudley invented electronic speech analysis and synthesis and achieved ten-times speech compression.
The onset of World War II created an urgent need for speech encryption of strategic conferences held via short-wave radio links. In just six months, an unbreakable top-secret speech scrambler, SIGSALY, was designed at Bell Telephone Laboratories (BTL). Brilliant engineers including Claude Shannon, Harry Nyquist, and Homer Dudley invented and implemented eleven fundamental breakthroughs, including PCM, flash A/D conversion, and spread spectrum.
The VOCODER and SIGSALY block diagrams, construction and operation are discussed, with their close links to modern audio codecs.
We describe the interesting backgrounds of the engineers, inventors, and mathematicians who laid the foundations of digital audio: Dudley and Shannon's work on SIGSALY, Hedy Lamarr's spread spectrum, and Turing's Delilah scrambler. A reconstruction of the very first PCM codec ADC, using vintage tubes, is described. This tutorial includes unusual photos, vintage music, early VOCODER speech, and very rare SIGSALY decrypted speech.
Saturday, May 20, 14:30 — 15:30 (Salon 4+5 London)
Paul Tapper (Presenter)
Streamed audio consumption is becoming an increasingly dominant factor in the music industry. This not only profoundly affects the music business but also has deep implications for the creative choices made during mixing and mastering. Two major technical considerations are codec compression and loudness normalization. Codecs produce a better result when the source audio does not have inter-sample clips. Loudness normalization dramatically affects the creative goals of mixing and mastering and the correct auditioning methodology. Producing a “hotter,” louder master may well sound better in the studio, but it will simply be attenuated during streaming, so the consumer will never hear that increased loudness. Oftentimes headroom and audio quality are sacrificed to achieve this greater studio loudness. These dramatic changes in music consumption patterns necessitate equally dramatic changes in the mixing and mastering practices of audio professionals.
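The inter-sample clipping issue mentioned above can be demonstrated numerically. The sketch below is illustrative only (it is not material from the session, and the `true_peak` helper is a hypothetical name); assuming NumPy, it shows a signal whose samples never exceed about 0.71 full scale yet whose band-limited waveform peaks at 1.0, which is exactly the situation that trips up codecs and D/A reconstruction:

```python
import numpy as np

def true_peak(x, oversample=4):
    """Estimate the true (inter-sample) peak by FFT-based oversampling."""
    n = len(x)
    X = np.fft.rfft(x)
    # Zero-pad the spectrum: band-limited interpolation of the waveform.
    X_pad = np.zeros(n * oversample // 2 + 1, dtype=complex)
    X_pad[: len(X)] = X
    y = np.fft.irfft(X_pad, n=n * oversample) * oversample
    return float(np.max(np.abs(y)))

# A sine at fs/4 with a 45-degree phase offset: every sample lands at
# +/- 0.707, although the continuous waveform reaches 1.0 between samples.
n = np.arange(64)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
print(round(float(np.max(np.abs(x))), 3))   # sample peak ≈ 0.707
print(round(true_peak(x), 3))               # true peak ≈ 1.0
```

A master normalized so that its sample peaks just touch full scale can therefore still overload on reconstruction, which is why true-peak metering is used before encoding.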
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Saturday, May 20, 15:30 — 16:30 (Salon 4+5 London)
Tom Ammermann (Presenter)
MPEG-H is now the TV broadcast standard in Korea, home to major end-user device manufacturers such as Samsung and LG. It is therefore very likely that MPEG-H will soon be available on many devices worldwide, which will in turn increase the demand for MPEG-H content. The tutorial will show how to create, edit, and export content for this application quickly and easily in common production workflows, with any DAW, using the Spatial Audio Designer.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Saturday, May 20, 16:30 — 18:00 (Salon 7 Vienna)
Rosalfonso Bortoni (Presenter)
The tutorial will present the “behind the scenes” of the audio and intercom systems planning and deployment for the Rio 2016 Games competition venues, analyzed from technical, practical, feasibility, and political perspectives. There were 39 competition venues in total, each with its own dedicated FOH and BOH systems designed to meet the needs of the Sports, Results, Broadcast, Press, and overall operations. The close relationship between the audio and intercom systems grew into an elegant and efficient network that even carried data and video signals alongside audio. The impact of the venues' physical and acoustical characteristics on the project will also be discussed.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Sunday, May 21, 12:45 — 14:30 (Berlin-A)
Eddy B. Brixen (Chair)
Working in audio forensics is serious business: depending on the work of the forensics engineer, people may end up in prison. This tutorial will present the kinds of work the field involves. It covers areas such as acoustics, when audio analysis forms part of a crime scene investigation. Voice acoustics: who was speaking? Electroacoustics: checking data on tapes, discs, or other storage media. Recording techniques: is this recording an original production, or is it a copy of others' work? Even building acoustics and psychoacoustics come into play when the question is raised: who could hear what? However, the most important everyday work of audio forensics engineers is cleaning up audio recordings and providing transcriptions. This tutorial is presented by practitioners who have their ears deep in the matter.
This session is presented in association with the AES Technical Committee on Audio Forensics
Sunday, May 21, 14:15 — 15:45 (Salon 7 Vienna)
Thor Legvold (Moderator), Tom Ammermann (Panelist), Ralph Kessler (Panelist), Hyunkook Lee (Panelist), Daniel Shores (Panelist)
There are a number of competing formats to choose from in Immersive Audio: Auro-3D, Dolby Atmos, DTS:X, NHK 22.2, and more. This tutorial will focus on practical and theoretical considerations when producing content for immersive formats, especially with regard to recording and mixing. We will suggest methodologies, explore emerging standards and workflows so as to be positioned to take advantage of ongoing developments in the field, and discuss market considerations and possible synergies between Immersive Audio and audio for Virtual/Augmented Reality. Panelists with research experience and published productions will discuss and present examples of their work in the various formats, provide valuable insight into the formats available, and offer practical advice on how to be “Immersive Audio” ready.
Sunday, May 21, 14:30 — 16:30 (Salon 4+5 London)
Wolfgang Klippel (Presenter)
This tutorial reports the development of new IEC standards addressing conventional and modern measurement techniques applicable to all kinds of transducers, active and passive loudspeakers and other sound reproduction systems. The first standard proposal describes important acoustical measurements for evaluating the generated sound field and signal distortion based on black-box modeling. The second standard is dedicated to the measurement of electrical and mechanical state variables (e.g. displacement), the identification of lumped and distributed parameters (e.g. T/S) and long-term testing to assess power handling, thermal capabilities, product reliability and climate impact. The tutorial gives a deeper insight into loudspeaker modeling, which is the basis for modern measurement techniques, and shows the practical relevance of this knowledge for transducer and system design.
This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones
Sunday, May 21, 15:45 — 16:45 (Salon 7 Vienna)
Tom Ammermann (Presenter)
Music does not take a cinematic approach, with spaceships flying around the listener. Nonetheless, music can become a fantastic spatial listening adventure on speakers as well as on common headphones. The new Kraftwerk Blu-ray production "Kraftwerk 3D" offers an outlook on how to create such an adventure and how it can sound. Production strategies and workflows for creating Dolby Atmos and Headphone Surround 3D content in current workflows and DAWs will be shown and explained.
Sunday, May 21, 16:30 — 18:00 (Salon 4+5 London)
Christopher J. Struck (Presenter)
This presentation reviews the basic electroacoustic concepts of gain, sensitivity, sound fields, linear and non-linear systems, and test signals for ear-worn devices. The Insertion Gain concept is explained, and free- and diffuse-field target responses are shown. Equivalent volume and acoustic impedance are defined. Ear simulators and test manikins appropriate for circum-aural, supra-aural, intra-aural, and insert earphones are presented. The salient portions of the ANSI/ASA S3.7 and IEC 60268-4 standards are reviewed. Examples of Frequency Response, Left-Right Tracking, Insertion Gain, Distortion, and Impedance measurements are shown. The basic concepts of noise-canceling devices are also presented.
This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones
AES Members can watch a video of this session for free.
Sunday, May 21, 17:00 — 18:00 (Salon 7 Vienna)
Nuno Fonseca (Presenter)
A little confused by all the new 3D formats out there? Although most 3D audio concepts have existed for decades, interest in 3D audio has increased in recent years with the new immersive formats for cinema and the rebirth of Virtual Reality (VR). This tutorial will present the most common 3D audio concepts, formats, and technologies, allowing you to finally understand buzzwords like Ambisonics/HOA, binaural, HRTF/HRIR, channel-based audio, object-based audio, Dolby Atmos, and Auro-3D/AuroMax, among others.
Special Thanks: In this session we are using headphones from http://silentdisco.de
AES Members can watch a video of this session for free.
Monday, May 22, 09:45 — 10:45 (Salon 4+5 London)
Andreas Hildebrand (Chair)
The AES67 standard on high-performance streaming audio-over-IP interoperability was published in September 2013. Since then, the first applications with AES67-compatible devices have been designed and put into operation. This session will demonstrate a working AES67 environment live and give insight into device and network configuration, management, and monitoring.
This session is presented in association with the AES Technical Committee on Network Audio Systems
Monday, May 22, 10:45 — 12:15 (Salon 4+5 London)
Gabriele Bunkheila (Presenter)
High-level programming languages are frequently used by DSP and audio engineers to develop audio products and plugins for music production. These languages allow designers to rapidly create and evaluate new audio processing ideas, which are later targeted for implementation in commercial audio products. In this workflow, fine-tuning the algorithm design is an important component. In this tutorial we will use our industry knowledge to summarize the best programming practices adopted by audio companies to enable research code to be reused directly in real-time prototypes. We will show a number of examples, tips, and tricks to minimize latency, maximize efficiency, run in real time on a PC, and generate native VST plugins for testing and prototyping.
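One such practice can be illustrated with a toy sketch (hypothetical code, not from the tutorial; the `SmoothedGain` class name is invented): keeping all algorithm state inside a processor object lets the identical code run over a whole file offline or frame-by-frame inside a real-time host, assuming NumPy:

```python
import numpy as np

class SmoothedGain:
    """Gain stage with one-pole parameter smoothing.

    All state lives on the object, so the same code can process an
    entire file offline or successive small frames in a real-time
    audio callback without modification.
    """
    def __init__(self, gain_db=0.0, alpha=0.99):
        self.target = 10.0 ** (gain_db / 20.0)   # linear target gain
        self.current = 1.0                       # smoothed gain state
        self.alpha = alpha                       # smoothing coefficient

    def process(self, frame):
        out = np.empty_like(frame, dtype=float)
        for i, s in enumerate(frame):
            # One-pole smoothing avoids zipper noise on parameter changes.
            self.current = (self.alpha * self.current
                            + (1.0 - self.alpha) * self.target)
            out[i] = s * self.current
        return out

g = SmoothedGain(gain_db=-6.0)
frames = [np.ones(256) for _ in range(8)]        # 8 frames of a constant signal
y = np.concatenate([g.process(f) for f in frames])
# The gain ramps smoothly toward 10^(-6/20), roughly 0.501, instead of jumping.
```

Because state survives across `process` calls, feeding the signal in eight 256-sample frames produces exactly the same output as one 2048-sample call, which is the property that makes research code reusable in a block-based real-time prototype.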
This session is presented in association with the AES Technical Committee on Recording Technology and Practices
AES Members can watch a video of this session for free.
Monday, May 22, 15:00 — 16:00 (Salon 7 Vienna)
Gavin Kearney (Presenter), Mariana Lopez (Presenter)
Audio Description (AD) is a pre-recorded verbal commentary added to a film or television program to make visual elements clearer to visually impaired audiences. One disadvantage of AD is that adding a layer of verbal commentary masks elements of the original soundtrack, so valuable information about the film, as well as part of the intended engagement, is lost. The Enhancing Audio Description project proposes using binaural audio to reduce the number of verbal descriptions needed for accessibility, with accurately placed sound elements giving audiences information on the position of characters and objects in space, as well as on cinematic elements such as high and low camera angles and shots.
Special Thanks: In this session we are using headphones from http://silentdisco.de
Monday, May 22, 15:30 — 16:30 (Salon 4+5 London)
David Scheirman (Presenter)
A timeline of live-performance system design evolution over four decades will be highlighted. In addition to a historical review of control and monitoring processes, this presentation bridges the gap from control-only networks to networked digital audio, noting migration paths to beam-steerable line-array elements that are now described as network endpoint devices. The tutorial also presents various loudspeaker enclosure and multi-box array topologies in use over time, as a broad-spectrum overview of technical developments since the AES 6th International Conference (Sound Reinforcement, Nashville, 1988) and the AES 13th International Conference (Computer-Controlled Sound Systems, Dallas, 1994). Each of these landmark events included content that foreshadowed today's high-powered loudspeaker arrays incorporating beam-steering technology. Recently emerging trends will be examined and potential future developments contemplated. The session should interest sound reinforcement technicians and system operators, installed-system designers, rental sound service companies, and live-sound equipment product development engineers.
AES Members can watch a video of this session for free.
Monday, May 22, 16:00 — 18:00 (Salon 7 Vienna)
Tom Ammermann (Presenter), Robert Schulein (Presenter)
Audio has always been an integral element in the creation of realistic audio-visual entertainment experiences. With the evolution of personal motion-tracked 3D imaging technologies, entertainment experiences with a higher degree of cognition, commonly referred to as virtual reality, are now possible. The quest for more engaging user experiences has raised the challenge of more compelling audio. Elements of binaural hearing and sound capture have come to play a central role in existing and evolving production techniques. This tutorial will cover the elements of binaural audio as they relate to producing compelling entertainment and educational content for virtual reality applications. Specific areas to be covered, with supporting audio and 3D anaglyph video demonstrations, include audio for games, music entertainment, radio drama, and music education. Audio production tools, including binaural and ambisonic capture microphone systems with and without motion capture, will be presented and demonstrated. The tutorial will also cover aspects of creating high-quality content with professional workflows using common tools and DAWs.
Special Thanks: In this session we are using headphones from http://silentdisco.de
Tuesday, May 23, 09:00 — 10:30 (Salon 4+5 London)
Gregor Höhne (Presenter)
Ongoing miniaturization and portability introduce new challenges for loudspeaker design. However, demands such as higher efficiency and reduced weight can be met by combining digital signal processing with optimized transducer design. The tutorial gives an introduction to nonlinear adaptive loudspeaker control and how it can be used to equalize, stabilize, linearize, and actively protect transducers. This covers the physical background of nonlinear loudspeaker behavior and the resulting demands on a control algorithm, with a strong focus on practical implementation. The discussion includes the resulting requirements for amplifier and transducer design, such as power demands and the advantages of DC coupling, as well as techniques for evaluating system performance.
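The linearization idea can be sketched with a deliberately simplified model (this is an illustration, not the presenter's algorithm; the `bl` coefficients and function names are invented). A loudspeaker's force factor Bl typically falls off with cone excursion x; if the controller has an estimate of x, it can predistort the drive signal so the effective force stays at its small-signal value:

```python
def bl(x, bl0=5.0, b2=2000.0):
    """Toy force-factor model: Bl (T*m) sags quadratically with excursion x (m)."""
    return bl0 * (1.0 - b2 * x ** 2)

def predistort(u, x_est):
    """Scale the drive signal so that Bl(x) * u_corr equals Bl(0) * u."""
    return u * bl(0.0) / bl(x_est)

u = 1.0                     # nominal drive signal
x = 0.005                   # estimated excursion: 5 mm
u_c = predistort(u, x)      # boosted drive compensating the sagging Bl
# Effective force with compensation matches the small-signal case:
print(bl(x) * u_c, bl(0.0) * u)
```

Real systems identify the Bl(x) curve adaptively and estimate x from a model rather than assuming them as here, but the compensation principle is the same: invert the measured nonlinearity before the transducer re-applies it.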
Tuesday, May 23, 10:30 — 12:15 (Salon 4+5 London)
Jamie Angus (Presenter)
Coded audio is an essential part of modern audio distribution, for the internet, film, and other media. But how does it work? What is it about a signal that allows you to reduce its data rate without loss, as in “lossless coding”? How can one take advantage of human perception when doing lossy coding such as MPEG? This tutorial will use both video and audio coding to explain the characteristics that allow such signals to be encoded at a reduced data rate without any loss of fidelity. It will then explain how the data rate can be reduced further with a minimum of perceptible distortion.
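The lossless part of that question has a compact numerical illustration (a sketch under stated assumptions, not tutorial material): audio is predictable from sample to sample, so a predictor's residual has a much smaller range than the signal itself and can be stored in fewer bits, exactly the principle behind the linear prediction used in lossless codecs such as FLAC. Assuming NumPy, and with an invented `bits_needed` helper:

```python
import numpy as np

# A smooth "audio-like" signal: consecutive samples are close together,
# so the trivial predictor "next sample = previous sample" leaves only
# a small residual (the first difference).
n = np.arange(1000)
x = np.round(1000.0 * np.sin(2.0 * np.pi * n / 100.0)).astype(int)
residual = np.diff(x)       # prediction error of the trivial predictor

def bits_needed(v):
    """Bits to represent every value in v as a signed integer of its range."""
    return int(np.ceil(np.log2(2 * int(np.max(np.abs(v))) + 1)))

print(bits_needed(x), bits_needed(residual))   # the residual needs fewer bits
```

The raw samples span roughly plus/minus 1000 while the residual spans only about plus/minus 63, so a plain range-based representation of the residual saves several bits per sample with no loss whatsoever; a real codec follows this with an entropy coder for further gains.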
AES Members can watch a video of this session for free.
Tuesday, May 23, 11:00 — 11:45 (Salon 7 Vienna)
Hassan Abbas Shakir (Presenter)
Beyond copyright, many other types of intellectual property rights (IPR) affect the audio industry. This tutorial explores these rights, e.g., patents, copyrights, trademarks, trade secrets, and trade dress. Explanations will be provided of what each right protects and does not protect, how to obtain each, and how to avoid infringing the rights of others. Advice will be provided on the strategic development of an IP portfolio and the incorporation of IP rights into contracts for services. To the extent possible, an opportunity for questions will be provided.
Tuesday, May 23, 13:00 — 14:30 (Salon 7 Vienna)
Lidwine H" (Presenter), Matthieu Parmentier (Presenter)
A tutorial considering the opportunities of object-based audio for broadcasters: new content, formats, workflows, and broadcasting strategies.
Object-based audio for broadcasters: what, why, how? From new storytelling possibilities to updated tools, this tutorial will cover the whole production chain to better serve the next generation of content and enhance the end-user experience.
This session is presented in association with the AES Technical Committee on Broadcast and Online Delivery
Tuesday, May 23, 14:30 — 15:15 (Salon 4+5 London)
Balázs Bank (Presenter)
Digital filters are often used to model or equalize acoustic or electroacoustic transfer functions. Applications include headphone, loudspeaker, and room equalization, or modeling the radiation of musical instruments for sound synthesis. As the final judge of quality is the human ear, filter design should take into account the quasi-logarithmic frequency resolution of the auditory system. This tutorial presents various approaches for achieving this goal, including warped FIR and IIR, Kautz, and fixed-pole parallel filters, and discusses their differences and similarities. It also shows their relation to fractional-octave smoothing, a method used for displaying transfer functions. With a better allocation of the frequency resolution and filtering resources, these methods require a significantly lower filter order compared to straightforward FIR and IIR designs at a given sound quality.
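The fractional-octave smoothing mentioned above can be sketched in a few lines (an illustrative implementation, not the presenter's; the function name is invented and NumPy is assumed): each point of the magnitude response is averaged over a window whose width is proportional to frequency, mimicking the ear's quasi-logarithmic resolution.

```python
import numpy as np

def fractional_octave_smooth(freqs, mag_db, fraction=3.0):
    """Average a magnitude response over windows 1/fraction of an octave wide.

    The window grows with frequency, so high frequencies are smoothed
    more heavily in Hz terms, matching the auditory system's
    quasi-logarithmic frequency resolution.
    """
    out = np.empty_like(mag_db)
    half = 2.0 ** (0.5 / fraction)          # half-window ratio
    for i, f in enumerate(freqs):
        band = (freqs >= f / half) & (freqs <= f * half)
        out[i] = mag_db[band].mean()
    return out

freqs = np.linspace(20.0, 20000.0, 2000)
ripple = np.sin(freqs / 50.0)               # fast comb-like ripple in the response
smooth = fractional_octave_smooth(freqs, ripple, fraction=3.0)
# At high frequencies the window spans many ripple periods, so the
# smoothed curve is nearly flat there while low frequencies keep detail.
```

The same allocation of resolution motivates the warped and parallel filter designs in the tutorial: resources are spent where the ear can hear the difference, which is why these designs achieve a given quality at a much lower filter order.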
This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones