AES Dublin 2019
Tutorial Details

Wednesday, March 20, 09:15 — 10:45 (Liffey Hall 2)

T01 - Bluetooth Audio and the Car

Chair:
Jonny McClintock, Qualcomm Technology International Ltd. - Belfast, Northern Ireland, UK
Panelists:
Francesco Condorelli, Qualcomm Technologies International GmbH - Munich, Germany
Jayant Datta, Audio Precision - Beaverton, OR, USA
Richard Hollinshead, MQA - Huntingdon, UK

The latest cars have advanced sound systems, and many people use their mobile phones to access music while driving, yet relatively little attention has been given to the wireless audio link between the phone and the car sound system. This tutorial will describe the use of Bluetooth with enhanced audio to connect in-car entertainment systems and the benefits this will bring to users. These developments will ensure that drivers and passengers can enjoy CD-quality audio or better without using cables, and that rear-seat passengers can enjoy synced gaming audio.

Many drivers are spending more time in the car with some commuters regularly stuck for hours in rush hour traffic. Sound and in-car entertainment systems are therefore very attractive to drivers and a valuable differentiator for car manufacturers. Advanced sound systems, which in some cars cost several thousand dollars and include more than 10 speakers, are a standard feature on some new cars, an important upgrade option on most cars, and represent a significant aftermarket opportunity.

The quality of sound enjoyed by the driver and passengers depends not only on the quality of the sound system but also on the quality of the audio source and the connectivity between the audio source and the sound system. Drivers have relied on satellite or DAB radio and multidisc CD systems to provide audio in cars. This is changing in a world where Internet radio, streamed music, and playlists stored on mobile phones are the audio sources chosen by users at home, on the move, and in their cars. Bluetooth connectivity has been available in cars for 15 years, and all smartphones support Bluetooth audio. Most new cars now come with Bluetooth connectivity, and the rest offer Bluetooth as an option.

 
 

Wednesday, March 20, 09:45 — 10:45 (Liffey A)

T02 - Dance Music Training—The Unregulated Industry and CPD—Analytical Listening Skills

Presenters:
Alexandra Bartles, Dance Music Production - Manchester, UK
Rick Snoman, Dance Music Production - Manchester, UK

Electronic dance music has grown into a multi-billion-dollar industry, but the training remains unregulated. The standards of training material vary greatly, ranging from entertainment to education. Anyone can set up an online tutorial delivery service, and there are no regulations within the industry. This has seen the creation of organizations with no proven track record. Students find themselves working through a minefield of misinformation, learning and practicing misguided dogmas, and learning not how to be innovative but how to copy. EMTAS developed a Code of Practice with the help of professionals in the dance music industry.

In this session we look at the industry and how the Code of Practice will improve educational standards and help to create a generation of individuals who can produce original electronic dance music. We will then perform an example of continued professional development, which is a requirement within the Code of Practice.

Critical and Analytical Listening (CPD)

The most fundamental skill that all music producers need is the ability to listen both critically and analytically. While there are numerous blogs, software programs, and phone apps teaching us how to listen critically, little information is given on how to listen with an analytical ear. Yet without analytical listening, we cannot create an emotional connection with our listening audience.

 
 

Wednesday, March 20, 11:00 — 12:30 (Liffey A)


T03 - Snare Drum Strategies: Record and Mix for Maximum Impact

Presenter:
Alex Case, University of Massachusetts Lowell - Lowell, MA, USA

The snare drum demands careful and constant attention—from our ears and our gear. With a dynamic range from whispering brushes to cannon fire, the snare drum challenges us to know our craft. Musical acoustics, room acoustics, and psychoacoustics guide the development of effective microphone techniques and signal processing strategies. Through informed control of spectrum, envelope, and image, the snare can be counted on to drive your music forward.

 
 

Wednesday, March 20, 11:00 — 12:30 (Liffey Hall 2)

T04 - Digital Filters, Filter Banks, and Their Design for Audio Applications

Presenter:
Gerald Schuller, Ilmenau University of Technology - Ilmenau, Germany; Fraunhofer Institute for Digital Media Technology (IDMT) - Ilmenau, Germany

This tutorial will teach how to design "Finite Impulse Response" and "Infinite Impulse Response" filters for audio applications, in theory and practice, and will give examples in the popular open source programming language Python. It will then go on to show how to design and use filter banks. Examples will be the "Modified Discrete Cosine Transform" (MDCT) filter bank, the (integer-to-integer) "IntMDCT", and low-delay filter banks, which are widely used in MPEG audio coding standards. Further, it will show digital filters as predictors for predictive coding, with applications in MPEG lossless coding standards. Finally, it will show how to implement filter banks as convolutional neural networks, which makes them "trainable" and eases the use of GPUs. This has applications, for instance, in audio source separation.
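As a flavor of the kind of Python example the tutorial describes, FIR and IIR filters can be designed in a few lines. This is only a sketch under the assumption that SciPy is used; the session's actual examples may differ:

```python
import numpy as np
from scipy import signal

fs = 48000  # sample rate in Hz

# linear-phase FIR lowpass: 101 taps, 8 kHz cutoff, windowed-sinc design
fir = signal.firwin(101, cutoff=8000, fs=fs)

# 4th-order Butterworth IIR lowpass with the same cutoff
b, a = signal.butter(4, 8000, fs=fs)

# inspect the magnitude responses of both designs
w, h_fir = signal.freqz(fir, 1, worN=1024, fs=fs)
w, h_iir = signal.freqz(b, a, worN=1024, fs=fs)
```

The FIR design trades more coefficients and latency for exact linear phase; the IIR design reaches a comparable magnitude response with far fewer coefficients but nonlinear phase.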

This session is presented in association with the AES Technical Committee on Signal Processing

 
 

Wednesday, March 20, 11:00 — 12:00 (Liffey Hall 1)


T05 - Teaching the Recording Studio Beyond the Classroom

Presenter:
Gabe Herman, The University of Hartford, The Hartt School - Hartford, CT, USA

One of the biggest challenges in teaching the recording studio environment is facilitating adequate access to learning labs and studio facilities to practice materials covered in class. This event will explore how innovative software, traditional studio tools, and new interactive programs can provide students new opportunities to continue applied learning outside of the classroom. Included in this presentation are critical listening in headphones and/or non-ideal loudspeaker monitoring environments with the use of innovative acoustical correction software, the use of plugins for learning the esoteric performance characteristics of outboard gear, as well as a proposed new approach to teaching and learning large-format analog mix consoles through HTML code based on research conducted at the Hartt School at the University of Hartford.

 
 

Wednesday, March 20, 14:15 — 15:15 (Liffey Hall 2)


T06 - Strategies in 3D Music Production (Upmixing vs Productions Aimed for 3D)

Presenter:
Lasse Nipkow, Silent Work LLC - Zurich, Switzerland

More and more music productions in 3D / immersive audio are realized natively. In contrast to upmix strategies, native production allows the musical potential of 3D / immersive audio to be fully achieved and used to create new forms of creativity. However, dealing meaningfully with the large number of audio channels is not trivial.

Lasse Nipkow presents his long-term findings and results with 3D audio in this event. He has been making 3D / immersive audio productions, recordings, and presentations for more than a decade at various conventions all over the world. During this session he will explore main aspects of spatial audio psychoacoustics and production strategies. Various sound and video examples will be shown to illustrate the context.

 
 

Wednesday, March 20, 14:15 — 15:15 (Liffey Hall 1)

T07 - LTI Filter Design and Implementation 101

Presenter:
Andrew Stanford-Jason, XMOS Ltd. - Bristol, UK

LTI filter design is a well documented process but the implementation specifics are rarely covered. In this workshop we will cover how to realize a filter from its specification to the instructions that will execute. We will work through a PDM to PCM conversion example covering how to partition the decimation process to maximize the performance of your hardware while achieving your desired filter performance. The intention is to clearly illustrate the trade-offs a designer has to consider that are typically outside of the specification, how to test for them and how to control them.

Topics include: • Filter design basics: from filter specification to realization; • Designing for floating and fixed point arithmetic; • Optimizing for your hardware; • Characterizing your filter: theory and testing; • Decimation/Interpolation: polyphase filters; • Frequency domain implementation; • Example: PDM to PCM conversion.
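The decimation partitioning mentioned above can be sketched in Python. Assuming SciPy, a single-stage FIR decimator looks like the following; real PDM pipelines would split the factor into cascaded stages, which is exactly the hardware trade-off the session examines:

```python
import numpy as np
from scipy import signal

fs_in = 3_072_000   # assumed PDM clock rate (illustrative, not from the session)
decim = 64          # overall decimation factor down to 48 kHz
t = np.arange(4096) / fs_in
x = np.sign(np.sin(2 * np.pi * 1000 * t))  # crude 1-bit-style stimulus

# anti-aliasing FIR, then keep every 64th sample; a real design would
# partition this into e.g. 8x, 4x, 2x stages to cut the MAC count
h = signal.firwin(255, cutoff=0.4 / decim)  # cutoff relative to Nyquist
y = np.convolve(x, h)[::decim]
fs_out = fs_in // decim
```

Partitioning lets the early stages use short, cheap filters at the high rate and reserves the long, sharp filter for the lowest rate.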

This session is presented in association with the AES Technical Committee on Signal Processing

 
 

Wednesday, March 20, 15:15 — 16:15 (Liffey Hall 2)


T08 - Modern Music Production for Immersive / 3D Formats

Presenter:
Lasse Nipkow, Silent Work LLC - Zurich, Switzerland

More and more music productions in 3D / immersive audio are realized natively. In contrast to upmix strategies, native production allows the musical potential of 3D / immersive audio to be fully achieved and used to create new forms of creativity. However, dealing meaningfully with the large number of audio channels and with new binaural headphone virtualizations is not trivial.

Lasse Nipkow presents his long-term findings and results with 3D audio in this workshop. This is a listening session: a series of impressive 3D audio productions will be presented.

 
 

Wednesday, March 20, 15:15 — 16:15 (Liffey Hall 1)


T09 - Microphones—Can You Hear the Specs? A Master Class

Presenter:
Eddy Bøgh Brixen, EBB-consult - Smørum, Denmark; DPA Microphones - Allerød, Denmark

There are numerous microphones available to the audio engineer. It is not easy to compare them on a reliable basis; often the choice of model is made on the basis of experience, habit, or simply because a microphone looks nice. Nevertheless, there is valuable information in the microphone specifications. This master class, held by well-known microphone experts from leading microphone manufacturers, demystifies the most important microphone specs and provides attendees with up-to-date information on how these specs are obtained and can be interpreted. Furthermore, many practical audio demonstrations are given to help everyone understand how the numbers relate to the perceived sound.

This session is presented in association with the AES Technical Committee on Microphones and Applications

 
 

Wednesday, March 20, 16:30 — 18:00 (Liffey Hall 1)

T10 - How to Rate the Maximum Loudspeaker Output SPLmax?

Presenters:
Steven Hutt, Equity Sound Investments - Bloomington, IN, USA
Wolfgang Klippel, Klippel GmbH - Dresden, Germany
Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK
Alexander Voishvillo, JBL/Harman Professional Solutions - Northridge, CA, USA

The new IEC standard IEC 60268-21 defines the maximum sound pressure level SPLmax generated by an audio device under specified conditions (broadband stimulus, 1-m distance, on-axis, under free-field conditions). This value can be rated by the manufacturer according to the particular application, under the condition that the device can pass a 100-h test using this stimulus without being damaged. The SPLmax according to IEC 60268-21 is not only a meaningful characteristic for the end user, marketing, and product development but is also required for calibrating analog or digital stimuli used for testing modern loudspeaker systems that have a wireless input and internal signal processing. The workshop gives an overview of related standards (e.g., CEA 2010) and shows practical ways to rate a meaningful SPLmax value giving the best sound quality, sufficient reliability, and robustness for the particular application. The methods are demonstrated on passive transducers and active (Bluetooth) speakers.
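For orientation, the dB SPL scale underlying such a rating is the standard level referenced to 20 µPa. The sketch below is the plain textbook formula, not the full IEC 60268-21 measurement procedure:

```python
import numpy as np

P_REF = 20e-6  # reference pressure in Pa (0 dB SPL)

def spl_db(p):
    """SPL in dB re 20 uPa from a pressure signal in Pa."""
    rms = np.sqrt(np.mean(np.square(p)))
    return 20.0 * np.log10(rms / P_REF)

# a 1 Pa RMS sine corresponds to the familiar 94 dB SPL calibrator level
t = np.arange(48000) / 48000
p = np.sqrt(2.0) * np.sin(2 * np.pi * 1000 * t)  # amplitude sqrt(2) -> 1 Pa RMS
level = spl_db(p)
```

An SPLmax rating additionally fixes the stimulus, distance, and free-field conditions so that the single number is comparable across devices.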

This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones

 
 

Thursday, March 21, 09:30 — 10:30 (Liffey A)


T11 - Uncovering the Acoustic Treasures of Cathedrals: The Use of Acoustic Measurements and Computer Modeling to Preserve Intangible Heritage

Presenter:
Lidia Alvarez Morales, Department of Theatre, Film and Television, University of York - York, North Yorkshire, UK

The last decades have seen an increase in interest in the study and preservation of the acoustics of worship sites considering their sound as part of their intangible heritage. This tutorial delves deep into the concepts and the methodology needed to characterize the acoustic field of cathedrals, which are an essential part of Europe’s cultural, architectural, and artistic heritage. Its aim is to serve as a guideline on how measurement techniques are applied to register monaural, binaural, and ambisonic room impulse responses, as well as on how simulation techniques can be used to assess the influence of occupancy or to recreate their acoustic field throughout history.

This tutorial is linked to three funded projects: two Spanish national projects on the Acoustics and Virtual Reality in Spanish Cathedrals, and the Marie Sklodowska-Curie Fellowship “Cathedral Acoustics,” which highlights the importance of acoustics as an essential element of the intangible heritage of cathedrals, defying the traditional focus on visual heritage.

 
 

Thursday, March 21, 10:15 — 11:15 (Liffey Hall 2)

T12 - Overview of Advances in Magnetic Playback

Presenters:
John Chester, Plangent Processes - Nantucket, MA, USA
Jamie Howarth, Plangent Processes - Nantucket, MA, USA

A comprehensive overview of recent advances in magnetic tape playback, focusing on the minimization of time-domain errors ranging from drift, wow, and flutter to phase response. We discuss standards and measurements and the use of record bias to provide a time reference for time-base correction and for precise azimuth adjustment. We cover the advantages of more accurate digital de-emphasis and the implementation of phase equalizers to compensate for errors in the recording process, including modeling in DSP the frequency response and phase characteristics of well-known recorders. We conclude by considering the ethics of preserving artifacts on the recording that can be of service to future restorations. Examples of antique and recent wire, film, and tape recordings will be presented.

 
 

Thursday, March 21, 11:00 — 12:30 (Liffey A)


T13 - Recording the Orchestra—Where to Start?

Presenter:
Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada

With a focus on the orchestra as a complex sound source, this tutorial will cover how to listen; understanding microphones, concert halls, and orchestral seating arrangements; how to set up the monitoring environment; and how to approach each section of the orchestra as part of the recording process. Concertos, works for voice and orchestra, and live concert recording will be discussed. Audio examples will be used to support the discussion.

 
 

Thursday, March 21, 11:30 — 12:30 (Liffey Hall 2)


T14 - The Sound of War: Capturing Sounds in Conflict Zones

Presenter:
Ana Monte, DELTA Soundworks - Germany

In field recording, preparation is key—but when preparing to record in a war zone, where do you even start? Ana Monte shares her experiences working on Picturing War, a Film Academy Baden-Württemberg student documentary directed by Konstantin Flemig.

She and the production team follow journalist and photographer Benjamin Hiller as he captures images of a YPJ all-female fighter unit, a refugee camp in Erbil, the Murambi Genocide Memorial in Rwanda, and Kurdish soldiers fighting in Northern Iraq.

 
 

Thursday, March 21, 12:45 — 14:15 (Liffey A)

T15 - Recording Irish Music

Presenter:
Sean Davies, S.W. Davies Ltd.

This session traces the recording activity that took place at first in the USA, where immigrant Irish musicians found good work playing to the large Irish communities in America. Recordings had a ready market (of course on 78-rpm discs), many of which were also pressed in the UK and Ireland and are often the only extant evidence of some legendary artists. We will play some of these, including the Sligo fiddler Michael Coleman, Leo Rowsome's pipe band, and later recordings of, e.g., Michael Gorman, the Donegal fiddler John Doherty, and the family band of the McPeakes. Additionally, we must remember that classical music was enriched by John Field, who invented the "Nocturne", later popularized by Chopin. On the vocal front we cannot ignore Count John McCormack, plus the many balladeers of popular songs. It may be only a slight exaggeration to say that preservation of the Irish music tradition would be justification enough for the invention of recording.

 
 

Thursday, March 21, 12:45 — 14:15 (Liffey Hall 2)


T16 - Psychoacoustics of 3D Sound Recording and Reproduction (with 9.1 demos)

Presenter:
Hyunkook Lee, University of Huddersfield - Huddersfield, UK

3D multichannel audio aims to produce an immersive auditory experience by adding the height dimension to the reproduced sound field. In order to use the added height channels most effectively, it is necessary to understand the fundamental psychoacoustic mechanisms of vertical stereophonic perception. This tutorial/demo session will provide sound engineers and researchers with an overview of important psychoacoustic principles to consider for 3D audio recording and reproduction. The talk will first introduce recent research findings on topics such as 3D sound localization, phantom elevation, vertical image spread, and 3D listener envelopment. It will then show how the research has been used to develop new 3D/VR microphone array techniques, 2D–3D upmixing techniques and a new 3D panning technique without using height channels. The talk will be accompanied by 9.1 recording demos of various types of music, including orchestra, choir, organ, EDM, etc.

 
 

Thursday, March 21, 12:45 — 14:15 (Liffey Hall 1)


T17 - Almost Everything You Ever Wanted to Know About Loudspeaker Design

Presenter:
Christopher Struck, CJS Labs - San Francisco, CA, USA; Acoustical Society of America

This tutorial will walk the audience through an entire loudspeaker design as well as introduce the basic concepts of loudspeakers. Equivalent circuits, impedance, and Thiele-Small parameters are shown. Inherent driver nonlinearities are explained. The effects of modal behavior and cone breakup are demonstrated. Closed-box and ported-box systems are analyzed, and several design examples are meticulously worked through, both with hand calculations and using CAD. Issues with multiple drivers and cabinet construction are discussed. Directivity and diffraction effects are illustrated. Crossover network design fundamentals are presented, with a specific design example for the previously shown ported enclosure design.

This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones

 
 

Thursday, March 21, 14:30 — 15:30 (Liffey Hall 2)


T18 - Case Studies in Jazz and Pop/Rock Music Production for 3D Audio

Presenter:
Will Howie, CBC/Radio-Canada - Vancouver, Canada

Music recording and mixing techniques for 3D audio reproduction systems will be discussed through several case studies covering jazz, pop/rock, and new music genres. Complex multi-microphone arrays designed for capturing highly realistic direct sound images are combined with spaced ambience arrays to reproduce a complete sound scene. Mixing techniques are aesthetically and technically optimized for 3D reproduction. It will be shown how these sound capture techniques, initially developed for use with large-scale loudspeaker-based formats such as 22.2 Multichannel Sound (9+10+3), can be scaled to smaller, more common standardized channel-based formats, such as 4+5+0 and 4+7+0. Numerous corresponding audio examples have been prepared for 9.1 (4+5+0) reproduction. The focus will be on practical, aesthetic, and technical considerations for 3D music recording.

 
 

Thursday, March 21, 16:15 — 17:30 (Liffey Hall 1)


T19 - Practical Deep Learning Introduction for Audio Processing Engineers

Presenter:
Gabriele Bunkheila, MathWorks - Madrid, Spain

Are you an audio engineer working on product development or DSP algorithms and willing to integrate AI capabilities into your projects? In this session we will walk through a simple deep learning example for speech classification. We will use MATLAB code and a speech command dataset made available by Google. We will cover creating and accessing labeled data, using time-frequency transformations, extracting features, designing and training deep neural network architectures, and testing prototypes on real-time audio. We will also discuss working with other popular deep learning tools, including exploiting available pre-trained networks.
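The session itself uses MATLAB; purely to illustrate the pipeline shape it describes (a labeled clip turned into time-frequency features that feed a classifier), here is a NumPy/SciPy sketch with a toy linear softmax stand-in for the deep network. Every name and parameter here is an assumption for illustration, not the session's code:

```python
import numpy as np
from scipy import signal

fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)  # stand-in for a 1-second speech command clip

# time-frequency transformation: log spectrogram, the network's input "image"
f, t, sxx = signal.spectrogram(x, fs=fs, nperseg=512, noverlap=384)
features = np.log(sxx + 1e-10)

# toy linear softmax classifier over the flattened features
# (a real system would train a deep network on many labeled clips)
n_classes = 10
w = rng.standard_normal((features.size, n_classes)) * 0.01
logits = features.ravel() @ w
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

The deep learning part replaces the random linear map with a trained convolutional or recurrent network, but the feature extraction front end stays essentially the same.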

This session is presented in association with the AES Technical Committee on Semantic Audio Analysis

 
 

Thursday, March 21, 16:30 — 18:00 (Liffey A)

T20 - Software Complements Loudspeaker Hardware

Presenters:
Wolfgang Klippel, Klippel GmbH - Dresden, Germany
Joachim Schlechter, Klippel GmbH - Dresden, Germany

Digital signal processing, amplification, and electroacoustical conversion converge into one active transducer, providing more sound output at higher quality with reduced size, weight, energy consumption, and manufacturing cost. Adaptive nonlinear control technology based on monitoring the electric input current cancels undesired signal distortion, actively protects the transducer against mechanical and electrical overload, and stabilizes the voice coil at the optimum rest position to ensure maximum excursion over the product's life. This control software opens new degrees of freedom for passive transducer and system development. The tutorial presents a new design concept that optimizes efficiency and voltage sensitivity while using a minimum of hardware resources. The design steps are illustrated through practical examples using new simulation, measurement, and diagnostic tools that analyze the performance of the audio device with nonlinear control.

This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones

 
 

Friday, March 22, 09:00 — 10:00 (Liffey Hall 1)


T21 - What's between Linear Movies and Video Games?

Presenter:
Lidwine Ho, France televisions - Paris, France

360° video is increasingly used in many areas. Broadcasters, for whom video is the core business, cannot ignore it. The question they must now address is how to reinvent television through new narrative formats without losing their identity and their objectives.

Traditional television aims to reconstruct reality through editing and to recreate emotions with music, while 360° video shows a sequence in which the user can explore the universe. Beyond the technical question of the tools and formats we used, we will explain, step by step, which keys we used to build a coherent and immersive sound universe while guiding the user's gaze.

The other question we will consider is: what is the next step? Do we have to build video games to reach our objectives? Is there a place between linear movies and video games for broadcasters? Which tools or standards do we need?

 
 

Friday, March 22, 09:00 — 10:00 (Liffey Hall 2)


T22 - Ambisonics or Not?—Comparison of Techniques for 3D Ambience Recording

Presenter:
Felix Andriessens, Ton und Meister - Germany

With head-tracked audio spreading in games and VR applications, and with 3D sound systems for cinema such as Dolby Atmos and Auro-3D on the rise, the demand for 3D audio recordings and recording setups keeps growing. This event presents a comparison of different recording techniques, from coincident to spaced setups, and discusses whether and where Ambisonic microphones are the best option for field recordings, especially ambience recordings.

 
 

Friday, March 22, 10:15 — 11:15 (Liffey Hall 1)

T23 - Live Production of Binaural Mixes of Classical Music Concerts

Presenters:
Matt Firth, BBC Research & Development - Salford, UK
Tom Parnell, BBC Research & Development - Salford, UK

In recent years the BBC has produced binaural mixes of the BBC Proms, a series of classical music concerts, and has made them available to the public online. In this workshop BBC engineers will present these binaural mixes and discuss the approach used in their production; this includes custom production tools for a live broadcast environment and the training of sound engineers in spatial audio mixing. There will be an opportunity to listen to these binaural mixes on headphones.

 
 

Friday, March 22, 11:30 — 12:45 (Liffey Hall 1)


T24 - Microphone Techniques for Live Music Productions (45 min)

Presenter:
Cesar Lamschtein, Kaps Audio Production Services - Montevideo, Uruguay; Mixymaster - Montevideo, Uruguay

It is quite common to be involved in live music productions where musical instruments with significant differences in sound level share the same stage. Under those circumstances, a natural acoustic musical balance is lost or cannot exist.

This, together with instrument placement in a stage design ruled by visual guidelines (video or audience) rather than musical/acoustic concerns, jeopardizes the signals we get from the microphones. The signal-to-noise ratio falls as noise rises (bleed from other instruments, stage monitoring, stage noise, ambient noise, etc.). This poor S/N ratio may lead to a loss of clarity and continuity in the program mix and may also impede signal processing that becomes necessary during postproduction.

We will discuss techniques borrowed from pop music production that, when applied to acoustic instruments, give better control of these parameters throughout the production process, helping not only in live music production mixing but also in acoustic music sound reinforcement situations.

 
 

Friday, March 22, 12:45 — 14:15 (Liffey A)

T25 - On-Location/Surround Classical Recording for Broadcast with Central Sound at Arizona PBS

Presenter:
Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA

Live classical recording for broadcast distribution over a variety of delivery systems can have challenges, even more so when the broadcasts are in surround. Among the concerns are maintaining a consistent aesthetic across broadcasts, dealing with varying venue acoustics, discretion in microphone placement for minimal visual interference and/or performer/patron access, redundancy, codecs, and more. Central Sound at Arizona PBS, a premier multi-award-winning provider of audio-media services, has dealt with many of these challenges, producing over 120 live classical and acoustic music productions annually for local, national, and international broadcast. Whether for radio, television, on-demand, and/or "live" streaming/broadcast, over 50% of the events are in surround (and soon surround with height). Manager of Central Sound and Executive Producer of Classical Arizona PBS, Alex Kosiorek, will include microphone techniques and practical solutions that result in high-quality productions. Audio examples will be included.

 
 

Friday, March 22, 14:30 — 15:30 (Liffey Hall 1)


T26 - Sound for Extreme 360° Productions

Presenter:
Martin Rieger, VRTONUNG - Munich, Germany

The workshop shows various examples of 360-degree video productions under challenging conditions, featuring location recordings and post-production.

The purpose of the talk is to give practical insights into immersive VR videos and how sound-to-picture needs to be approached, which differs greatly from usual film formats and requires a lot of knowledge beyond audio as such. Different technologies, and sometimes even custom solutions, are needed on set and in post. There is no use for a boom microphone and its operator; they are replaced by an immersive microphone array for which, just as with 360° cameras, there is no perfect setup for every occasion, despite what people tend to claim.

 
 

Friday, March 22, 15:45 — 16:45 (Liffey Hall 1)


T27 - Binaural Multitracking with Binaural Directional Convolution Reverb

Presenter:
Matthew Lien, Whispering Willows Records Inc. - Whitehorse, Yukon, Canada; Universal Music Publishing - Taipei, Taiwan

With the majority of today's music delivered over headphones, the time for binaural audio has arrived. But producing binaural music has presented challenges, with some producers declaring native binaural unsuitable for popular music production. Binaural tracking has often yielded poor results (especially with drums) compared to non-binaural multitracking, and the reliance on recorded room ambiance was not ideal, allowing no further wet/dry adjustment or enhancement when mixing.

However, recent research and new recording techniques, and the use of Directional Binaural Convolution Reverb (described in eBrief 277 "Space Explorations—Broadening Binaural Horizons with Directionally-Matched Impulse Response Convolution Reverb") are yielding impressive binaural multitracking results, even with mainstream music styles.

After gathering an extensive collection of directional binaural impulse responses from churches and halls across Europe, Asia, and Canada for the creation of directional binaural convolution reverb, and following experimental "native binaural" recording-studio sessions, a new approach to native binaural multitracking has been established.

This tutorial demonstrates, with video and audio samples, the various stages of this new approach to binaural music production: early- and late-reflection directional binaural impulse response capture (and how each is used for best results); native binaural studio tracking of instruments ranging from drums and bass to strings, marimbas, Hammond B3, and more; and mixing and mastering.

 
 

Friday, March 22, 17:00 — 18:00 (Liffey Hall 1)


T28 - Creating Immersive Spatial Audio for Cinematic VR

Presenter:
Marcin Gorzel, Google Inc. - Dublin, Ireland; YouTube - Mountain View, USA

In this tutorial we will focus on theory and practice for creating immersive spatial audio for cinematic VR experiences. No knowledge of spatial audio is assumed; however, experience in audio technology or sound design will be helpful for understanding some concepts covered in this talk.

After completing this tutorial you should have a basic understanding of the theory behind spatial audio capture, manipulation, and reproduction, as well as practical knowledge that will allow you to create and deploy spatial audio on YouTube. We will cover topics such as: what immersive spatial audio is and how it conceptually differs from traditional multichannel stereo audio; what Ambisonics is and the notion of sound field reproduction (the theory behind spatial audio); and spatial audio recording and post-production techniques.
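The Ambisonic sound-field idea can be illustrated with first-order encoding of a mono source at a given direction. This sketch uses the ambiX (ACN channel order, SN3D normalization) convention that YouTube's first-order spatial audio format is based on; the function name and test signal are illustrative:

```python
import numpy as np

def encode_foa_ambix(s, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order ambiX channels (W, Y, Z, X)."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = s * 1.0                       # omnidirectional component (SN3D: unity)
    y = s * np.sin(az) * np.cos(el)   # left(+)/right(-) figure-of-eight
    z = s * np.sin(el)                # up(+)/down(-) figure-of-eight
    x = s * np.cos(az) * np.cos(el)   # front(+)/back(-) figure-of-eight
    return np.stack([w, y, z, x])

s = np.ones(4)  # short constant test signal
bfmt = encode_foa_ambix(s, azimuth_deg=90.0, elevation_deg=0.0)  # hard left
```

Because the source direction lives entirely in these four channel gains, rotating the whole scene for head tracking is just a small matrix applied to (Y, Z, X), which is what makes interactive binaural rendering cheap.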

We will also cover some YouTube-specific topics and workflows, including preparing and uploading spatial audio to YouTube and the spatial audio formats and open source codecs supported by YouTube. You'll learn how spatial audio works on YouTube mobile (Android/iOS) and desktop platforms, and about the audio DSP algorithms used by YouTube to render interactive binaural audio. Lastly, we will give some insights into spatial audio loudness and quality considerations, as well as propose some good practices for creating spatial audio content.

 
 

Saturday, March 23, 09:00 — 10:30 (Liffey A)


T29 - Audio Mastering's Core Values and the Influence of Modern Technology

Presenter:
Jonathan Wyner, M Works Studios/iZotope/Berklee College of Music - Boston, MA, USA; M Works Mastering

There are certain core principles, and perhaps “ethics,” rolled into traditional mastering practice. As technology changes, we gain opportunities to do things better and face challenges when those principles are diluted or obscured. This event will highlight examples of each and offer areas of inquiry worth undertaking for anyone looking to engage in audio mastering. The end of the workshop will examine the influence of technology in recording and audio distribution on the aesthetic of mastered audio, looking ahead to changes that may be on the near horizon.

 
 

Saturday, March 23, 10:00 — 11:00 (Liffey Hall 2)


T30 - Creating Sound Effects Libraries

Presenter:
Benjamin-Saro Sahihi, SoundBits - Germany

You can hear them in action movies, commercials, documentaries, and video games. They can tell and support a story, define a character, make a world authentic, or provide important acoustic feedback on actions or notifications. We’re talking about sound effects. There are many cases where it is impossible for a sound designer or sound editor to record and edit their own sound effects for a particular project, mostly because of a lack of budget, time, or equipment, or because they have no access to specific props or the typical sounds of foreign countries. That is where large general sound effects libraries and independent sound effect pack creators come into play. In this event, professional sound designer and sound effects manufacturer Saro Sahihi, founder of SoundBits (www.soundbits.de), examines the workflows, tools, and specific challenges of crafting sounds from scratch for sound effects libraries.

 
 

Saturday, March 23, 10:45 — 12:15 (Liffey Hall 1)

T31 - Understanding Line Source Behavior for Better Optimization

Presenter:
François Montignies, L-Acoustics - Marcoussis, France

The key to a good loudspeaker system design is the balance between coverage, SPL, and frequency response performance. It involves challenges such as directivity control, auditory health preservation, and sonic homogeneity.
In solutions based on a variable-curvature line source, the parameters linked to its physical deployment are often overlooked. The temptation then arises to rely on electronic processing to fix the resulting problems. However, this always compromises other aspects of performance to some extent, whether system headroom or wavefront integrity.
Using Fresnel analysis, this tutorial points out important aspects of line source behavior and identifies the effects of determinant parameters, such as inter-element angles. It shows how an optimized physical deployment allows for rational electronic adjustments, which then become the icing on the cake.

AES Technical Council This session is presented in association with the AES Technical Committee on Acoustics and Sound Reinforcement

 
 

Saturday, March 23, 11:15 — 12:45 (Liffey Hall 2)


T32 - Emotive Sound Design in Theory

Presenter:
Thomas Görne, Hamburg University of Applied Sciences - Hamburg, Germany

The tutorial explores film sound design from the viewpoints of perception, psychology, and communication science, starting with the auditory and audiovisual object. A special focus is set on communication through crossmodal correspondences of auditory perception expressed in metaphors like height, brightness, or size, as well as on the semantics of sound symbols, the emotional impact of ambiguous objects, inattentional deafness due to the limited bandwidth of conscious perception (i.e., the "inaudible gorilla"), and image-sound relationships. Applying these principles leads to sound design concepts such as realism/naturalism, the modern attention-guiding "hyperrealistic" approach, and more experimental approaches like expressionism and impressionism.

 
 

Saturday, March 23, 12:45 — 14:15 (Liffey Hall 2)


T33 - Emotive Sound Design in Practice

Presenter:
Anna Bertmark, Attic Sound Ltd. - Brighton, UK

How can sound give a sense of emotional perspective in cinema? Putting the audience in the shoes of a character in a story can have a powerful impact on empathy and immersion. Sound designer Anna Bertmark gives an insight into her work and the techniques used to depict the point of view of characters. Using clips from films such as God’s Own Country (for which she won the 2017 BIFA for Best Sound), Adult Life Skills, and The Goob, among others, she will talk about the reasons and ideas behind these techniques, from contact mic recording through to the final result as heard in the cinema.

 
 

Saturday, March 23, 13:45 — 15:15 (Liffey A)


T34 - Mix Kitchen: Mixing Tips and Tricks for Modern Music Genres

Presenter:
Marek Walaszek, Bettermaker - Warsaw, Poland

In today’s genres of pop, hip hop, and EDM there is a fine line between production, mixing, and mastering. It is not uncommon for at least two of the above to be done simultaneously in the music creation process. Marek “Maro” Walaszek will take an in-depth look at today’s mixing and mastering techniques and how they affect the sound.

 
 

Saturday, March 23, 14:00 — 14:45 (Liffey Hall 1)


T35 - Noise Predictions with Sound Systems Using System Data & Complex Summation (Implemented in SoundPLAN & NoizCalc)

Presenter:
Daniel Belcher, d&b audiotechnik - Backnang, Germany

As the number of outdoor events in urban areas increases, so do the challenges with, and awareness of, the accompanying noise emissions, and therefore the need for accurate noise predictions. A new method for such noise predictions with sound reinforcement systems at outdoor events is introduced; it uses the system data of actual system designs and applies complex summation.

The system data, including all electronic filters, is simply imported with a system design file. This eliminates friction because it spares the repeated process of re-modeling a sound system in the noise prediction software, and it ensures that the prediction is made with the actual system design. It is also the only sensible way to include the specific electronic filters (including IIR and FIR) of a sound system.

Noise prediction software has traditionally not considered complex summation because noise sources (e.g., in traffic and industry) are assumed to be uncorrelated with each other. Sound systems are quite different in this respect: the signals sent to the loudspeakers at different positions are usually correlated. Furthermore, modern sound systems even use coherence effects to shape directivity (e.g., arrays).
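The distinction matters numerically. A minimal sketch of the two summation rules (illustrative only, not the implementation in SoundPLAN or NoizCalc; function names and levels are hypothetical): two uncorrelated 90 dB sources add on a power basis to about +3 dB, while two correlated, in-phase sources add as phasors to +6 dB, and their sum is phase-dependent.

```python
import numpy as np

def spl_incoherent(levels_db):
    """Power (energy) summation: sources assumed uncorrelated,
    as in classical traffic/industry noise prediction."""
    return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

def spl_coherent(levels_db, phases_rad):
    """Complex summation: correlated sources add as phasors,
    so the result depends on their relative phase."""
    amps = 10 ** (np.asarray(levels_db) / 20)
    total = np.sum(amps * np.exp(1j * np.asarray(phases_rad)))
    return 20 * np.log10(np.abs(total))

# Two identical 90 dB sources:
print(spl_incoherent([90, 90]))                 # ~93.0 dB (+3 dB, power sum)
print(spl_coherent([90, 90], [0, 0]))           # ~96.0 dB (+6 dB, in phase)
print(spl_coherent([90, 90], [0, np.pi / 2]))   # ~93.0 dB (quadrature)
```

Phases approaching 180° drive the coherent sum toward cancellation, which is exactly the effect arrays exploit to shape directivity.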

 
 

Saturday, March 23, 15:00 — 16:30 (Liffey Hall 1)


T36 - Modern Sampling Part 2: Sparse/Compressive Sampling/Sensing

Presenter:
Jamie Angus-Whiteoak, University of Salford - Salford, Greater Manchester, UK; JASA Consultancy - York, UK

Sparse and Compressive Sampling allows one to sample a signal at apparently less than the Nyquist/Shannon limit of two times the highest frequency without losing any signal fidelity. If some loss of fidelity is allowed, the signal can be sampled at an even lower average rate. How can this be? The answer is that the effective information rate is actually lower than the highest frequency.

The purpose of this tutorial is to give a (mostly) non-mathematical introduction to Sparse/Compressive Sampling. We will examine the difference between “Sparse” and “Dense” signals, define what is meant by “rate of innovation,” and see how it relates to sample rate. We will then go on to see how we can create sparse signals, either via transforms or filters, to provide signals that can be sampled at much lower rates. We will then show how some of these methods are already used in audio, and suggest other areas of application, such as measurement. Finally, we will finish by showing how a commonly used audio system can be considered to be a form of compressive sensing.
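The core idea can be sketched in a few lines of code. The following Python example (illustrative only; the dimensions, seed, and recovery algorithm are my assumptions, not material from the tutorial) uses Orthogonal Matching Pursuit, one common compressive-sensing recovery algorithm, to reconstruct a 5-sparse signal of length 256 exactly from only 100 random linear measurements, well under the 256 samples classical sampling would require:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Greedily pick the column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit the signal on the chosen support by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 100, 5                # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                       # only 100 "samples" instead of 256
x_rec = omp(A, y, k)
print(np.linalg.norm(x_rec - x_true))  # reconstruction error
```

The recovery works because the signal’s rate of innovation (5 nonzero values plus their positions) is far below its nominal bandwidth, which is exactly the point the tutorial makes.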

AES Technical Council This session is presented in association with the AES Technical Committee on High Resolution Audio

 
 

