144th AES CONVENTION Tutorial Details

AES Milan 2018

Wednesday, May 23, 09:15 — 10:15 (Scala 2)


T01 - Crash Course in 3D Audio

Nuno Fonseca, Polytechnic Institute of Leiria - Leiria, Portugal; Sound Particles

A little confused by all the new 3D formats out there? Although most 3D audio concepts have existed for decades, interest in 3D audio has increased in recent years with the new immersive formats for cinema and the rebirth of Virtual Reality (VR). This tutorial will present the most common 3D audio concepts, formats, and technologies, allowing you to finally understand buzzwords like Ambisonics/HOA, Binaural, HRTF/HRIR, channel-based audio, object-based audio, and Dolby Atmos, among others.


Wednesday, May 23, 09:30 — 11:00 (Scala 1)

T02 - Reusing and Prototyping to Accelerate Innovation in Audio Signal Processing

Gabriele Bunkheila, MathWorks - Cambridge, UK
Jonas Rutstrom, MathWorks - Sollentuna, Sweden

Voice assistants are shifting consumer expectations on the performance and capabilities of audio devices and human-machine interfaces. As new products are driven to deliver increasingly complex features, successful manufacturers and IP providers need to reuse more design assets, deliver original innovation more efficiently, and prototype more quickly than ever before. In this session you will learn about different techniques to integrate existing code and IP into early simulations of algorithms and system designs, ranging from embeddable code to cloud-based services. You will also be exposed to quick prototyping workflows, including methods for running in real time and validating ideas on live real-world signals. The presentation will go through practical worked examples using MATLAB, while discussing some early-stage challenges in the design of voice-driven connected devices.


Wednesday, May 23, 10:15 — 11:15 (Scala 3)


T03 - Perceptually Motivated Filter Design with Applications to Loudspeaker-Room Equalization

Balázs Bank, Budapest University of Technology and Economics - Budapest, Hungary

Digital filters are often used to model or equalize acoustic or electroacoustic transfer functions. Applications include headphone, loudspeaker, and room equalization, or modeling the radiation of musical instruments for sound synthesis. As the final judge of quality is the human ear, filter design should take into account the quasi-logarithmic frequency resolution of the auditory system. This tutorial presents various approaches for achieving this goal, including warped FIR and IIR, Kautz, and fixed-pole parallel filters, and discusses their differences and similarities. It also shows their relation to fractional-octave smoothing, a method used for displaying transfer functions. With a better allocation of frequency resolution, these methods require a significantly lower computational power compared to straightforward FIR and IIR designs at a given sound quality.

AES Technical Council This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones and AES Technical Committee on Signal Processing


Wednesday, May 23, 11:15 — 12:15 (Scala 1)


T04 - Build a Synth for Android

Don Turner, Developer Advocate, Android Audio Framework - UK

With 2 billion users, Android is the world's most popular operating system, and it can be a great platform for musical creativity. In this session Don Turner (Developer Advocate for the Android Audio Framework) will build a synthesizer app from scratch* on Android. He'll demonstrate methods for obtaining the best performance from the widest range of devices, and how to take advantage of the new breed of low-latency Android "pro audio" devices. The app will be written in C and C++ using the Android NDK APIs.
*Some DSP code may be copy/pasted

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Games


Wednesday, May 23, 14:15 — 16:15 (Scala 1)


T05 - Tune Your Studio—Acoustic Analysis of Small Rooms for Music

Lorenzo Rizzi, Suono e Vita - Acoustic Engineering - Lecco, Italy

This tutorial will address acoustic quality issues in small rooms for music. Focusing on internal acoustic quality (time response, frequency response, acoustic parameters), it will be rich in practical examples useful for sound engineers, musicians, and acousticians tackling small-room issues.


Wednesday, May 23, 15:00 — 16:30 (Lobby)


T06 - Psychoacoustics of 3D Sound Recording (with 9.1 Demos)

Hyunkook Lee, University of Huddersfield - Huddersfield, UK

3D surround audio formats aim to produce an immersive soundfield in reproduction by utilizing elevated loudspeakers. In order to make optimal use of the added height channels in sound recording and reproduction, it is necessary to understand the psychoacoustic mechanisms of vertical stereophonic perception. Against this background, this tutorial/demo session aims to provide an overview of important psychoacoustic principles that recording engineers and spatial audio researchers need to consider when making a 3D recording using a microphone array. Various microphone array techniques and practical workflows for 3D sound capture will also be introduced, and their pros and cons will be discussed. This session will play demos of various 9.1 sound recordings, including the recent Auro-3D and Dolby Atmos release for the Siglo de Oro choir.


Wednesday, May 23, 16:30 — 18:00 (Scala 1)


T07 - Benefiting from New Loudspeaker Standards

Wolfgang Klippel, Klippel GmbH - Dresden, Germany

This tutorial focuses on the development of new IEC standards, addressing conventional and modern measurement techniques applicable to all kinds of transducers, active and passive loudspeakers, and other sound reproduction systems. The first proposed standard (IEC 60268-21) describes important acoustical measurements for evaluating the generated sound field and signal distortion. The second standard (IEC 60268-22) is dedicated to the measurement of electrical and mechanical state variables (e.g., displacement), the identification of lumped and distributed parameters (e.g., T/S), and long-term testing to assess power handling, thermal capabilities, product reliability, and climate impact. The third standard (IEC 63034) addresses the particularities of micro-speakers used in mobile and other personal audio devices. The tutorial gives a deeper insight into the background, theory, and practical know-how behind these standards.

AES Technical Council This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones


Wednesday, May 23, 16:30 — 18:00 (Scala 3)


T08 - Modern Sampling: It’s Not About the Sampling; It’s About the Reconstruction!

Jamie Angus, University of Salford - Salford, Greater Manchester, UK; JASA Consultancy - York, UK

Sampling, and sample rate conversion, are critical processes in digital audio. The analogue signal must be sampled, so that it can be quantized into a digital word. If these processes go wrong, the original signal will be irretrievably damaged!
1. Does sampling affect the audio?
2. Can we reconstruct audio after sampling?
3. Does sampling affect the timing, or distort the music?
4. Can modern sampling techniques improve things?
This tutorial will look at the modern theories of sampling, and explain, in a non-mathematical way, how these modern techniques can improve the sampling and reconstruction of audio.
Using audio examples, it will show that sampled audio, when properly reconstructed, preserves all of the original signal. Because it’s not the sampling but the reconstruction that matters!

AES Technical Council This session is presented in association with the AES Technical Committee on High Resolution Audio


Wednesday, May 23, 16:30 — 17:30 (Scala 2)


T09 - AES67 & ST2110 - An Overview

Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany

In September 2017, the new ST2110 standard on "Professional Media over Managed IP Networks" was published by SMPTE. What does ST2110 cover, how does it relate to AES67, and what is the practical impact on the audio industry? This tutorial describes the basic principles, commonalities, and differences of ST2110 and AES67, and elaborates on the constraints defined in ST2110 with respect to AES67. It also includes a brief outlook on how transport of non-linear audio formats (AES3) will be defined in ST2110.


Thursday, May 24, 09:30 — 10:30 (Scala 1)

T10 - From Seeing to Hearing: Sound Design and Spatialization for Visually Impaired Film Audiences

Gavin Kearney, University of York - York, UK
Mariana Lopez, University of York - York, UK

This tutorial presents the concepts, processes, and results linked to the Enhancing Audio Description project (funded by AHRC, UK), which seeks to provide accessible audio-visual experiences to visually impaired audiences using sound design techniques and spatialization. Film grammars have been developed throughout film history, but such languages have matured with sighted audiences in mind, assuming that seeing is more important than hearing. We will challenge such assumptions by demonstrating how sound effects, first-person narration, and breaking the rules of sound mixing can allow us to create accessible versions of films that are true to the filmmaker's conception. We will also discuss how the guidelines developed have been applied in higher education to train filmmakers on the importance of sound.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema and AES Technical Committee on Spatial Audio


Thursday, May 24, 10:30 — 12:00 (Arena 3 & 4)


T11 - Total Timbre: Tools and Techniques for Tweaking and Transforming Tone

Alex Case, University of Massachusetts Lowell - Lowell, MA, USA

Recordists shape timbre through the coordinated use of several different signal processors. While equalization is a great starting point, the greatest tonal flexibility comes from strategic use of additional timbre-modifying signal processors: compression, delay, reverb, distortion, and pitch shift. This tutorial defines the timbre-driving possibilities of the full set of studio effects, connecting key FX parameters to their relevant timbral properties, with audio examples that reveal the results. This multi-effect approach to timbre enables you to extract more from the effects you already use, and empowers you to get the exact tones you want.


Thursday, May 24, 10:45 — 12:15 (Scala 1)


T12 - Acoustic Enhancement Systems

Ben Kok, BEN KOK - acoustic consulting - Uden, The Netherlands

Acoustic enhancement systems can be considered the ultimate in electroacoustics, as these systems influence the perceived natural acoustics of a given space. Due to their complex nature (and their interaction with subjective perception), these systems are often considered mystic or even a "black art." This tutorial will focus on how subjective and objective room acoustic qualities and parameters can be influenced with the use of acoustic enhancement, and how this relates to the requirements and design criteria for such a system.

In the presentation it is assumed that attendees already have a basic knowledge of auditorium acoustics and acoustic enhancement systems, or at least have read the feature on acoustic enhancement by Francis Rumsey in JAES, Vol. 62, No. 6, June 2014.

AES Technical Council This session is presented in association with the AES Technical Committee on Acoustics and Sound Reinforcement


Thursday, May 24, 10:45 — 12:15 (Scala 3)


T13 - Perceptual and Physical Evaluation of Guitar Loudspeakers

Wolfgang Klippel, Klippel GmbH - Dresden, Germany

Loudspeakers and headphones generate distortion in the reproduced sound that can be assessed by objective measurements based on a physical or perceptual model, or by subjective evaluation in systematic listening tests. This tutorial gives an overview of the various techniques and discusses how measurement and listening can be combined by auralization techniques to give a more reliable and comprehensive picture of the quality of the sound reproduction. Separating the distortion from speech signals makes it possible to assess distortion components that, for most stimuli, lie below the audibility threshold. Further analysis of the separated distortion signals gives clues for identifying the physical root cause of loudspeaker defects, which is crucial for fixing design problems.


Thursday, May 24, 12:30 — 13:15 (Lobby)


T14 - Music Is the Universal Language

Fei Yu, Dream Studios - China; U.S.

Music supervisor and music producer Fei Yu has worked on Chinese video games and movies for the last eight years. Through her collaborations with Western composers, she has gained a unique perspective on this new form of international collaboration.

This event will discuss the critically acclaimed score to NetEase Games' "Revelation," the popular Chinese fantasy MMORPG that will soon be released internationally. As it is one of the first Chinese games to be scored by a Westerner for the emerging Chinese video game market, we will show, using musical examples, how this Chinese project was approached. We will also talk about recording techniques for projects that use Chinese instruments, such as the movie Born in China: how to balance the sound, how to communicate with the composer, etc.


Thursday, May 24, 14:00 — 15:30 (Scala 3)


T15 - Listening Tests - Understanding the Basic Concepts

Jan Berg, Luleå University of Technology - Piteå, Sweden

Listening tests are important tools for audio professionals as they assist our understanding of audio quality. There are numerous examples of tests, either formally recommended and widely used or specially devised for a single occasion. In order to understand listening tests and related methods, and also to potentially design them and fully benefit from their results, some basic knowledge is required. This tutorial aims to address audio professionals without prior knowledge of listening test design and evaluation. The fundamentals of what to ask for, how to do it, whom to engage as listeners, what sort of results may be expected, and similar issues will be covered.

AES Technical Council This session is presented in association with the AES Technical Committee on Perception and Subjective Evaluation of Audio Signals


Thursday, May 24, 16:45 — 17:45 (Scala 1)


T16 - Audio Localization Method for VR Application

Joo Won Park, Columbia University - New York, NY, USA

Audio localization is a crucial component of Virtual Reality (VR) projects, as it contributes to a more realistic VR experience for users. In this event a method to implement localized audio that is synced with the user's head movement is discussed. The goal is to process an audio signal in real time to represent a three-dimensional soundscape. This tutorial introduces the mathematical concepts, acoustic models, and audio processing that can be applied to general VR audio development. It also provides a detailed overview of an Oculus Rift/Max/MSP demo that demonstrates the techniques.


Thursday, May 24, 17:00 — 18:00 (Scala 3)


T17 - Use of Delay-Free IIR Filters in Musical Sound Synthesis and Audio Effects Processing

Federico Fontana, University of Udine - Udine, Italy

The delay-free loop problem appears when an audio electronic system, typically an analog processor, is transformed into the digital domain by means of a filter network preserving the structural connections among nonlinear components. If such connections include delay-free loopbacks, then there is no explicit procedure for computing the corresponding digital filter output.

The tutorial will show how delay-free IIR filter networks are designed, realized, and finally computed, and why they have led to successful real-time digital versions of the Dolby B, the Moog and EMS VCS3 voltage-controlled filters, as well as nonlinear oscillators and RLC networks, magnitude-complementary parametric equalizers, and finite-difference time-domain scheme-based models of membranes characterized by low wave dispersion.

AES Technical Council This session is presented in association with the AES Technical Committee on Signal Processing


Friday, May 25, 10:30 — 12:00 (Scala 1)

T18 - The Sound Will Take a Lead in VR

Daniel Deboy, DELTA Soundworks - Germany
Ana Monte, DELTA Soundworks - Germany
Szymon Aleksander Piotrowski, Psychosound Studio - Kraków, Poland
Michal Sokolowski, Slavic Legacy VR - Warsaw, Poland
Paulina Shepanic, Slavic Legacy VR - Warsaw, Poland

This session focuses on details of the work on two games in VR. This innovative platform introduces not only new possibilities and perspectives but also new challenges and questions.

Game "Slavic Legacy VR": the first and largest VR game based on mysterious Slavic mythology and culture. Through virtual reality, the player is immersed in a dark story set in Eastern Europe. From the very beginning, the player has to sneak and hide from a deadly threat and its consequences. The unique fairytale atmosphere and the award-winning graphics, uncommon in VR, are a mixture of realism and the creators' artistic vision. All of the tools, elements of the environment, and encountered mythical beasts were created based on ethnographic research so that they faithfully reflect the life and beliefs of the people of that time. The game also confronts the problem of motion sickness with a completely new way of moving and of designing the locations. These and many other features make Slavic Legacy a valued, one-of-a-kind, and immersive production for the educational and entertainment market. The authors will present gameplay with binaural playback and discuss technical, perceptual, and aesthetic aspects of the production process and its choices. Panelists will also bring up the challenges of sound implementation in Unreal Engine.

Game “The Stanford Virtual Heart”
Pediatric cardiologists at Lucile Packard Children's Hospital Stanford are using immersive virtual reality technology to explain complex congenital heart defects, which are some of the most difficult medical conditions to teach and understand. The Stanford Virtual Heart experience helps families understand their child's heart conditions by employing a new kind of interactive visualization that goes far beyond diagrams, plastic models, and hand-drawn sketches. For medical trainees, it provides an immersive and engaging new way to learn about the two dozen most common and complex congenital heart anomalies by allowing them to inspect and manipulate the affected heart, walk around inside it to see how the blood is flowing, and watch how a particular defect interferes with the heart's normal function. The panelists will give insight into the challenges for the sound design and how it was integrated in Unity.


Friday, May 25, 12:45 — 13:45 (Lobby)

T19 - Supporting the Story with Sound—Audio for Film and Animation

Kris Górski, AudioPlanet - Koleczkowo, Poland; Gdansk University of Technology
Kyle P. Snyder, Ohio University, School of Media Arts & Studies - Athens, OH, USA

Storytelling is a primary goal of film and there's no better way to ruin the story than with bad sound. This session will focus on current workflows, best practices, and techniques for film and animation by breaking down recent projects from the panelists.

Kyle Snyder will present audio from the "Media in Medicine" documentary The Veterans’ Project, a film that seeks to bring civilian physicians and the American public closer to the individual voices of veterans of all ages, as they return to civilian life and seek to live long productive lives after they have served. The Veterans’ Project is a feature-length documentary that weaves together the testimonies of injured or ill combat veterans who navigate the complexities of military, VA, and civilian medical systems in seeking treatment and reintegration into the civilian world.

Rodzina Treflików, presented by Kris Górski, is a classic stop-motion animation series for children. It uses original solutions for the mouth animation and, generally speaking, tries to combine old and new techniques to make an appealing proposition for the youngest audience. It is meant primarily for airing on children's network TV, but it is also shown in selected movie theaters. The fourth season is currently being finished. The soundtrack is produced in 5.1 and stereo.


Friday, May 25, 13:30 — 15:00 (Arena 3 & 4)

T20 - Before the Studio: The Art of Preproduction

Bill Crabtree, Middle TN State University - Murfreesboro, TN, USA
Wes Maebe, RAK Studios/Sonic Cuisine - London, UK
Barry Marshall, The New England Institute of Art - Boston, MA, USA
Mandy Parnell, Black Saloon Studios - London, UK
Marek Walaszek, Addicted to Music Studio - Warsaw, Poland

Preproduction is as essential to the success of music productions as it is to the success of any film or television show. So why do so many young producers skimp on or even skip preproduction? This panel will take a look at the preparation of both the act and the act's material for the recording process. We will focus on lyric and melody as well as musical arrangements and instrumentation.


Friday, May 25, 14:00 — 15:00 (Lobby)


T21 - Kraftwerk and Booka Shade—The Challenge to Create Electro Pop Music in Immersive / 3D Audio

Tom Ammermann, New Audio Technology GmbH - Hamburg, Germany

Music does not take a cinematic approach, with spaceships flying around the listener. Nonetheless, music can become a fantastic spatial listening adventure in Immersive / 3D. How this sounds will be demonstrated with this year's new Kraftwerk (Grammy-nominated) and Booka Shade Blu-ray releases. Production philosophies, strategies, and workflows to create Immersive / 3D content in current workflows and DAWs will be shown and explained.


Friday, May 25, 14:00 — 15:00 (Scala 1)


T22 - A Practical Use of Spatial Audio for Storytelling, and Different Delivery Platforms

Axel Drioli, 3D Sound Designer and Producer - London, UK / Lille, France

An overview of spatial audio's characteristics, strengths, and weaknesses for storytelling. A case study will be analyzed to better understand what can be done at every stage of the spatial audio production process, from recording to decoding for different delivery platforms. This presentation is tailored for an audience of content creators and engineers who may be only partly familiar (or not familiar at all) with the capabilities of spatial audio for storytelling, and uses demos and examples to explain and clarify concepts.


Friday, May 25, 14:30 — 16:00 (Scala 3)

T23 - Optimizing Transducer Design for Systems with Adaptive Nonlinear Control

Gregor Höhne, Klippel GmbH - Dresden, Germany
Marco Raimondi, STMicroelectronics SRL - Cornaredo (MI), Italy

Modern loudspeakers increasingly incorporate digital signal processing, amplification and the transducer itself in one unit. The utilized algorithms not only comprise linear filters but adaptively control the system, deploying measured states and complex models. Systems with adaptive nonlinear control can be used to equalize, stabilize, linearize, and actively protect the transducer. Thus, more and more demands can be taken care of by digital signal processing than by pure transducer design, opening new degrees of freedom for the latter. The tutorial focuses on how this freedom can be utilized to design smaller and more efficient loudspeaker systems. Examples are given for how existing transducer designs can be altered to increase their efficiency and which new challenges arise when driving speaker design to its limits.


Friday, May 25, 15:15 — 16:15 (Scala 1)


T24 - 360 & VR - Create OZO Audio and Head Locked Spatial Music in Common DAWs

Tom Ammermann, New Audio Technology GmbH - Hamburg, Germany

VR and 360 are mainly headphone applications, so binaural headphone virtualization, well known for decades, is becoming more important than ever. Fortunately, current binaural virtualization technologies offer a fantastic new experience far removed from the creepy, low-quality "surround simulations" of the past. The presentation will show how to create content for NOKIA's new 360 audio format, OZO Audio, and how to create head-locked binaural music in common DAWs.


Friday, May 25, 16:15 — 17:45 (Scala 3)


T25 - Live Sound Subwoofer System Optimization

Adam J. Hill, University of Derby - Derby, Derbyshire, UK

There is little reason in this day and age to accept undesirable low-frequency sound coverage in live sound reinforcement. The theories behind subwoofer system optimization are well known within academia and various branches of industry, although this knowledge isn't always fully translated into practical terms for end users. This tutorial provides a comprehensive overview of how to achieve desirable low-frequency sound coverage, including: subwoofer polar response control, array and cluster configuration, signal routing/processing options, performance stage effects, source decorrelation, acoustic barriers, and perceptual considerations. The tutorial is suitable for practitioners, academics, and students alike, providing practical approaches to low-frequency sound control and highlighting recent technological advancements.

AES Technical Council This session is presented in association with the AES Technical Committee on Acoustics and Sound Reinforcement


Saturday, May 26, 09:00 — 10:30 (Scala 1)

T26 - Cancelled


Saturday, May 26, 09:15 — 10:15 (Scala 2)


T27 - Hearing the Past: Using Acoustic Measurement Techniques and Computer Models to Study Heritage Sites

Mariana Lopez, University of York - York, UK

The relationship between acoustics and theater performances throughout history has been an area of extensive research. However, studies on pre-seventeenth century acoustics have focused either on Greek and Roman or Elizabethan theater. The study of medieval acoustics has been centered on churches both as worship spaces and as sites of liturgical drama, leaving aside the spaces used for secular drama performances. This tutorial explores how a combination of acoustic measurement techniques adapted to challenging outdoor spaces and the use of a multiplicity of computer models to tackle the unknowns regarding heritage sites can help enhance our knowledge on how our ancestors 'heard' drama performances in connection to the acoustics of the performance spaces and the surrounding soundscapes.

AES Technical Council This session is presented in association with the AES Technical Committee on Acoustics and Sound Reinforcement and AES Technical Committee on Archiving Restoration and Digital Libraries


Saturday, May 26, 10:30 — 12:00 (Scala 2)


T28 - Intelligent Acoustic Interfaces for High-Definition 3D Audio

Danilo Comminiello, Sapienza University of Rome - Rome, Italy

This tutorial introduces a new paradigm for interpreting 3D audio in acoustic environments. Intelligent acoustic interfaces involve both sensors for data acquisition and signal processing methods, with the aim of providing a high-quality 3D audio experience to the user. The tutorial explores the motivations for intelligent 3D interfaces; in that sense, the recent standardization of MPEG-H has provided an incredible boost. It then analyzes how to design 3D acoustic interfaces, involving Ambisonics and small arrays. The main methodologies for processing recorded 3D audio signals in order to "provide intelligence" to interfaces are then introduced; here, we leverage the properties of signal processing in the quaternion domain. Finally, some examples of 3D audio applications involving intelligent acoustic interfaces are shown.


Saturday, May 26, 12:30 — 13:15 (Scala 1)

T29 - Icosahedral Loudspeaker Listening Session (IKO)

Frank Schultz, University of Music and Performing Arts Graz - Graz, Austria
Franz Zotter, IEM, University of Music and Performing Arts - Graz, Austria

In this listening session we show examples of how to employ focused sound beams played by the powerful 20-channel spherical loudspeaker array IKO. Just as with sound bars, but more powerfully, its beams enable the use of the available wall reflections as a passive surround system; but there is more to it. In our artistic research project OSIL, we explored artistic and psychoacoustic subtleties of the IKO. You will hear that sound positioning by beamforming in a room is not restricted to distinct wall reflections but supports a whole range of intermediate and room-traversing positions. Artistically, control of the IKO's beams allows composers to create sculptural effects. We will explain and listen to some of the basic effects before listening to advanced musical compositions.


Saturday, May 26, 13:30 — 14:15 (Scala 1)

T30 - Surround with Depth Listening Session on IEM Loudspeaker Cubes

Thomas Deppisch, Graz University of Technology - Graz, Austria; University of Music and Performing Arts Graz - Graz, Austria
Matthias Frank, University of Music and Performing Arts Graz - Graz, Austria
Nils Meyer-Kahlen, Graz University of Technology - Graz, Austria; University of Music and Performing Arts Graz - Graz, Austria
Franz Zotter, IEM, University of Music and Performing Arts - Graz, Austria

This listening event shows effects and music on a new quadraphonic surround-with-depth playback system, whose control, measurement, and design are described in our engineering brief at the convention. Our playback system consists of four loudspeaker cubes, each of which uses four loudspeakers driven by first-order beam control. We show how reflections are involved in creating a stable sound image for a large audience, and how direct and first-order-reflected sound is used to get smooth surround panning for close sounds, while a set of second-order reflections permits panning to distant sounds. We will explain and play back the effects we programmed before listening together to a song mixed in surround-with-depth.

