AES NEW YORK 2019
147th PRO AUDIO CONVENTION

AES New York 2019
Exhibits-Plus Badge Event Details

Wednesday, October 16, 9:00 am — 5:00 pm (SDA Booth)


Student / Career: SC00 - Resume Review (for Students, Recent Grads, and Young Professionals)

Moderator:
Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA; Arizona State University - Phoenix, AZ, USA

Students, recent graduates, and young professionals: your resume is often an employer’s first impression of you, and naturally you want to make a good one. Employers often use job-search websites to find candidates, and some use automated software to scan your resume and, in some cases, your LinkedIn and social media profiles as well. Questions may arise about formatting, length, and the keywords and phrases that help your resume show up in searches and land on the desk of the hiring manager. No matter how refined your resume may be, it is always good to have someone else review your materials. Receive a one-on-one, 20-25 minute review of your resume from a hiring manager in the audio engineering business. Plus, if time allows, your cover letter and online presence will be reviewed as well.

Sign up at the student (SDA) booth immediately upon arrival. If you would like to have your resume reviewed on Wednesday, October 16, prior to SDA-1, please email the request to: aesresumereview@outlook.com. You may be asked to upload your resume prior to your appointment. Uploaded resumes will be seen only by the moderator and will be deleted at the conclusion of the 147th Pro Audio Convention.

Reviews take place throughout the convention, by appointment only.

 
 

Wednesday, October 16, 10:00 am — 6:00 pm

Exhibit: Exhibition

Leave plenty of time to walk the exhibits floor so you can learn about the latest products and technologies, scope out your competitors, or simply drool over the latest toys.

 
 

Wednesday, October 16, 10:00 am — 11:00 am (Mix with the Masters Workshop Stage)


AES Mix with the Masters Workshop: MM01 - Joe Chiccarelli

Presenter:
Joe Chiccarelli, Producer, mixer, engineer - Boston, MA, USA

 
 

Wednesday, October 16, 10:30 am — 11:00 am (IP Pavilion Theater)


AoIP Pavilion: AIP01 - Introduction to the Audio/Video-over-IP Technology Pavilion

Presenter:
Terry Holton, AIMS Audio Group Chairman - London, UK

The Audio/Video-over-IP Technology Pavilion is an initiative created by the AES in partnership with the Alliance for IP Media Solutions (AIMS). Following the success of last year’s pavilion, which provided valuable insights and practical information about audio-over-IP networking, the scope of this year’s pavilion has been expanded to include video and other related professional media networking topics. This presentation will provide an overview of the various aspects of the pavilion as well as an introduction to AES67 and its relationship to the SMPTE ST 2110 standard.

 
 

Wednesday, October 16, 10:30 am — 12:00 pm (South Concourse A)

Poster: P3 - Posters: Transducers

P3-1 Acoustic Beamforming on Transverse Loudspeaker Array Constructed from Micro-Speakers Point Sources for Effectiveness Improvement in High-Frequency Range
Bartlomiej Chojnacki, AGH University of Science and Technology - Cracow, Poland; Mega-Acoustic - Kepno, Poland; Klara Juros, AGH University of Science and Technology - Cracow, Poland; Daniel Kaczor, AGH University of Science and Technology - Cracow, Poland; Tadeusz Kamisinski, AGH University of Science and Technology - Cracow, Poland
Variable-directivity speaker arrays are popular in many acoustic applications, such as wearable systems, natural source simulation, and acoustic scanners. Standard systems built from traditional drivers, despite sophisticated DSP, have limited beamforming possibilities because loudspeakers have very narrow directivity patterns at high frequencies. This paper presents a new approach to micro-speaker array design from monopole sources, based on an isobaric speaker configuration. The new solution achieves high efficiency over a broadband frequency range while keeping the matrix small. The presentation explains the isobaric speaker principles used and compares a standard transverse transducer matrix with the innovative point-source matrix in two configurations. The results show improved beamforming effectiveness in the high-frequency range with the new driver matrix construction.
Convention Paper 10227 (Purchase now)
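For readers new to the topic, the beam-steering idea the paper builds on can be illustrated with the most basic array technique, delay-and-sum beamforming. The sketch below is a generic NumPy illustration (not the authors' isobaric design): each element's signal is delayed according to its position and the steering angle, then the signals are averaged.

```python
import numpy as np

def delay_and_sum(signals, positions, angle_deg, fs, c=343.0):
    """Steer a uniform line array toward angle_deg by delaying and summing.

    signals:   (n_elements, n_samples) time-domain signals.
    positions: (n_elements,) element positions along the array axis, meters.
    fs:        sample rate in Hz; c is the speed of sound in m/s.
    """
    angle = np.deg2rad(angle_deg)
    delays = positions * np.sin(angle) / c            # seconds per element
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # Fractional delay applied as a phase ramp in the frequency domain.
        spectrum = np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n)
    return out / signals.shape[0]
```

At broadside (0 degrees) the delays vanish and the array simply averages its elements; off-axis, signals add incoherently at high frequencies, which is the narrowing the paper's point-source construction aims to mitigate.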

P3-2 Spherical Microphone Array Shape to Improve Beamforming Performance
Sakurako Yazawa, NTT - Tokyo, Japan; Hiroaki Itou, NTT - Tokyo, Japan; Ken'ichi Noguchi, NTT Service Evolution Laboratories - Yokosuka, Japan; Kazunori Kobayashi, NTT - Tokyo, Japan; Noboru Harada, NTT Communication Science Labs - Atsugi-shi, Kanagawa-ken, Japan
A 360-degree steerable super-directional beamforming system is proposed. We designed a new acoustic baffle for a spherical microphone array to achieve both small size and high performance. The baffle is a sphere with parabola-like depressions, so sound-collection performance can be enhanced using reflection and diffraction. We first evaluated its beamforming performance in simulation, then fabricated a 3D prototype microphone array with the proposed baffle shape and compared its performance to that of a conventional spherical 3D acoustic baffle. The prototype exhibited better beamforming performance. We also built a microphone array system that combines the proposed acoustic baffle with a 360-degree camera; the system can match picked-up sound to an image in a specific direction, in real time or after recording. The system demo has received high marks from users.
Convention Paper 10228 (Purchase now)

P3-3 Infinite Waveguide Termination by Series Solution in Finite Element Analysis
Patrick Macey, PACSYS Limited - Nottingham, UK
The acoustics of an audio system may comprise several components, e.g., a compression driver producing plane waves, a transition connecting to the throat of a horn, and a cylindrical horn that is baffled at the mouth. While finite elements/boundary elements can model the entire system, it is advantageous from the design perspective to consider simplified systems. A compression driver might be used in many situations and should be designed radiating plane waves, without cross modes, into a semi-infinite tube. The pressure field in the tube can be represented by a series that is coupled to the finite element mesh by a DtN approach. The method is generalized to cater for ducts of arbitrary cross section and infinite cylindrical horns.
Convention Paper 10229 (Purchase now)

P3-4 Evaluating Listener Preference of Flat-Panel Loudspeakers
Stephen Roessner, University of Rochester - Rochester, NY, USA; Michael Heilemann, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
Three flat-panel loudspeakers and two conventional loudspeakers were evaluated in a blind listening test. Two of the flat-panel loudspeakers used in the test were prototypes employing both array-based excitation methods and constrained viscoelastic damping to eliminate modal resonant peaks in the mechanical response of the vibrating surface. The remaining flat-panel speaker was a commercially available unit. A set of 21 listeners reported average preference ratings of 7.00/10 and 6.81/10 for the conventional loudspeakers, 6.48/10 and 5.90/10 for the prototype flat-panel loudspeakers, and 2.24/10 for the commercial flat-panel speaker. The results are consistent with those given by a predictive model for listener preference rating, suggesting that designs aimed at smoothing the mechanical response of the panel lead to improved preference ratings.
Convention Paper 10230 (Purchase now)

P3-5 Modelling of a Chip Scale Package on the Acoustic Behavior of a MEMS Microphone
Yafei Nie, Institute of Acoustics, Chinese Academy of Sciences - Beijing, China; Jinqiu Sang, Chinese Academy of Sciences - Beijing, China; Chengshi Zheng, Institute of Acoustics, Chinese Academy of Sciences - Beijing, China; Xiaodong Li, Chinese Academy of Sciences - Beijing, China; Chinese Academy of Sciences - Shanghai, China
Micro-electro-mechanical system (MEMS) microphones have been widely used in mobile devices in recent decades. The acoustic effects of a chip-scale package on a MEMS microphone need to be validated. Previously, a lumped equivalent circuit model was adopted to analyze the acoustic frequency response of the package. However, such a theoretical model cannot predict performance at relatively high frequencies. In this paper a distributed parameter model is proposed to simulate the acoustic behavior of the MEMS microphone package. The model illustrates how the MEMS microphone's acoustic transfer function is affected by the size of the sound hole and the volumes of the front and back chambers. The model can also illustrate the mechanical response of the MEMS microphone. The proposed model provides a more reliable route toward an optimized MEMS package structure.
Convention Paper 10231 (Purchase now)

P3-6 Personalized and Self-Adapting Headphone Equalization Using Near Field Response
Adrian Celestinos, Samsung Research America - Valencia, CA, USA; Elisabeth McMullin, Samsung Research America - Valencia, CA, USA; Ritesh Banka, Samsung Research America - Valencia, CA, USA; Pascal Brunet, Samsung Research America - Valencia, CA, USA; Audio Group - Digital Media Solutions
Variability in the acoustical coupling of headphones to human ears depends on a number of factors. Placement, the size of the user's head and ears, and the headband and ear-pad materials are all major contributors to the sound quality the headphone delivers to the user. By measuring the transfer function from the driver terminals to a miniature microphone set near the driver inside the cavity formed by the headphone and the ear, the degree of acoustical coupling and the fundamental frequency of the cavity volume were acquired. An individualized equalization based on these measurements was applied for each user. Listeners rated the personalized EQ significantly higher than a generic target response and slightly higher than the bypassed headphone.
Convention Paper 10232 (Purchase now)

P3-7 Applying Sound Equalization to Vibrating Sound Transducers Mounted on Rigid Panels
Stefania Cecchi, Università Politecnica delle Marche - Ancona, Italy; Alessandro Terenzi, Università Politecnica delle Marche - Ancona, Italy; Francesco Piazza, Università Politecnica delle Marche - Ancona, Italy; Ferruccio Bettarelli, Leaff Engineering - Osimo, Italy
In recent years, loudspeaker manufacturers have brought to market vibrating sound transducers (also called shakers or exciters) that can be installed on a surface or panel, transforming it into an invisible speaker capable of delivering sound. These systems show different frequency behaviors depending mainly on the type and size of the surface. Audio equalization is therefore crucial to enhance sound reproduction performance and achieve a flat frequency response. In this paper a multi-point equalization procedure is applied to several surfaces equipped with vibrating transducers, showing its positive effect from both objective and subjective points of view.
Convention Paper 10233 (Purchase now)

 
 

Wednesday, October 16, 11:00 am — 11:45 am (Recording Stage)


Project Studio Expo Recording Stage: RS01 - Sweetwater Presents: Everything You Need to Know about Microphones

Presenter:
Lynn Fuston, Sweetwater - Fort Wayne, IN, USA

Lynn Fuston covers the topic of microphones from the very basics to advanced applications, including a live vocal demo of different mics from Shure and Sennheiser/Neumann.

 
 

Wednesday, October 16, 11:00 am — 11:45 am (Live Production Stage)

Live Production Stage: LS01 - TBA

 
 

Wednesday, October 16, 11:00 am — 11:30 am (IP Pavilion Theater)


AoIP Pavilion: AIP02 - Designing with Dante and AES67/SMPTE 2110

Presenter:
Patrick Killianey, Audinate - Buena Park, CA, USA

In September 2019, Audinate added support for SMPTE ST 2110 in Dante devices. In this presentation, Audinate will share its vision of how Dante and AES67/SMPTE 2110 can be used together in a modern studio, including Dante Domain Manager. A tour of Dante’s managed and unmanaged integration with open standards leads into a discussion of network security, Layer 3 traversal, and real-time fault monitoring, all laid out with practical diagrams of broadcast facilities ranging from location sound to small studios and large properties.

 
 

Wednesday, October 16, 11:00 am — 12:00 pm (Mix with the Masters Workshop Stage)

AES Mix with the Masters Workshop: MM02 - Gavin Lurssen & Reuben Cohen

Presenters:
Reuben Cohen, Lurssen Mastering - Los Angeles, CA, USA
Gavin Lurssen, Lurssen Mastering - Los Angeles, CA, USA

 
 

Wednesday, October 16, 11:30 am — 12:00 pm (IP Pavilion Theater)


AoIP Pavilion: AIP03 - Audio Monitoring Solutions for Audio-over-IP

Presenter:
Aki Mäkivirta, Genelec Oy - Iisalmi, Finland

ST 2110 and AES67 have established audio-over-IP as the next generation standard for audio monitoring. This presentation discusses the benefits of using audio-over-IP over traditional audio monitoring methods, and why the change to audio-over-IP is happening with increasing speed. Practical case examples of using audio-over-IP in professional broadcasting as well as in AV install audio applications are presented.

 
 

Wednesday, October 16, 12:00 pm — 12:30 pm (Booth 266 (Ex. Fl.))

Audio Builders Workshop Booth Talk: ABT01 - Find Your DIY Voice, Hack Your Own Stuff

Presenter:
Michael Swanson

 
 

Wednesday, October 16, 12:00 pm — 1:30 pm (1E15+16)

Special Event: SE01 - Opening Ceremonies / Awards / Keynote Speech

Presenters:
Agnieszka Roginska, New York University - New York, NY, USA
Valerie Tyler, College of San Mateo - San Mateo, CA, USA
Jonathan Wyner, M Works Studios/iZotope/Berklee College of Music - Boston, MA, USA; M Works Mastering
Grandmaster Flash

Awards Presentation
Please join us as the AES presents Special Awards to those who have made outstanding contributions to the Society in such areas of research, scholarship, and publications, as well as other accomplishments that have contributed to the enhancement of our industry.

The Keynote Speaker for the 147th Convention is Grandmaster Flash. Emerging from the South Bronx in the early 1970s, Grandmaster Flash is inarguably one of Hip Hop’s original innovators. In the earliest days of the genre, he manipulated music by placing his fingers on the vinyl, perfected beat looping, and discovered many of the most iconic beats still commonly sampled today. His influence on how music is created has been profound, and it’s no surprise that The New York Times calls him Hip Hop’s first virtuoso. The title of his address is “GRANDMASTER FLASH: EVOLUTION OF THE BEAT.”

AES Technical Council This session is presented in association with the AES Technical Committee on Recording Technology and Practices

 
 

Wednesday, October 16, 12:00 pm — 12:45 pm (Recording Stage)


Project Studio Expo Recording Stage: RS02 - Make the Most of Your Vocals with Jack Joseph Puig

Presenter:
Jack Joseph Puig, Record Executive/Producer/Mixer - Hollywood, CA, USA

Multi-Grammy Award winner Jack Joseph Puig has had a successful and varied career, working with blues legend Eric Clapton and John Mayer; with roots-rock revisionists like The Black Crowes, Sheryl Crow, and Counting Crows; with pop superstars like the Goo Goo Dolls, Robbie Williams, Lady Gaga, Florence and the Machine, and The Pussycat Dolls; with country artists like Keith Urban, Faith Hill, and Sugarland; and with indie heroes Chris Isaak, Jellyfish, Dinosaur Jr., Guided By Voices, and Beck, as well as the Black Eyed Peas, Green Day, No Doubt, 311, U2, Weezer, Fiona Apple, Klaxons, Fergie, Mary J. Blige, Panic! at the Disco, and The Rolling Stones. In the process of building such a catalogue, Puig has earned a Grammy Award and a strong reputation as a sound engineer.

 
 

Wednesday, October 16, 12:00 pm — 1:00 pm (Mix with the Masters Workshop Stage)

AES Mix with the Masters Workshop: MM03 - TBA

 
 

Wednesday, October 16, 12:00 pm — 12:30 pm (IP Pavilion Theater)

AoIP Pavilion: AIP04 - Deploying SMPTE ST 2110 in a Distributed Campus System

Presenters:
Cassidy Lee Phillips, Imagine Communications - Plano, TX, USA
Tony Pearson, North Carolina State University

When NC State University (NCSU) was looking to upgrade the Pro-AV gear driving its Distance Education and Learning Technology Applications (DELTA) program, they decided to future-proof their infrastructure with a standards-based approach to IP, while continuing to leverage existing SDI gear. Using SMPTE ST 2110, NCSU significantly multiplied the capacity of their fiber infrastructure — where four single-mode fiber strands had supported four HD video channels, now just two fiber strands deliver 32 bidirectional HD channels.

This presentation will share the real-world experiences and lessons learned from adopting AES67 and ST 2110 to successfully deploy a first-of-its-kind inter-campus Pro-AV system.

 
 

Wednesday, October 16, 12:30 pm — 1:00 pm (IP Pavilion Theater)


AoIP Pavilion: AIP05 - Introduction to AES70

Presenter:
Ethan Wetzell, OCA Alliance - New York, NY, USA

AES70, also known as OCA, is an architecture for system control and connection management of media networks and devices. AES70 is capable of working effectively with all kinds of devices from multiple manufacturers to provide fully interoperable multivendor networks. This session will provide an overview of the AES70 standard, including the current and future objectives of the standard.

 
 

Wednesday, October 16, 1:00 pm — 2:00 pm (EDM Stage)

Electronic Dance Music & DJ Stage: EDJ01 - Allen & Heath, RELOOP, XONE & HERCULES Present:

Presenter:
Jamie Thompson

Bridging the Gap between Home Production and the Stage

Come learn what you will need to take your home music production to the stage and perform using tools like DJ mixers and midi controllers, as well as software programs like Ableton Live and Traktor Pro. There will be a live demonstration using all of these tools along with a Q&A segment to answer any questions you may have.

 
 

Wednesday, October 16, 1:00 pm — 1:45 pm (Live Production Stage)

Live Production Stage: LS02 - Shure Presents: Wireless Workflow Software

RF can be your sound system’s best friend, and its worst enemy. Wireless Workbench 6 is the Shure software solution that keeps the signal on your side, in every environment. In this session, the Shure product team will demonstrate how to remotely monitor and manage every piece of gear connected to your system, calculate and analyze frequencies so you can coordinate the entire show, and track RF data for later review using the Timeline feature. All this from one application. Find out how WWB6 can streamline your workflow so you can operate your wireless with more confidence.

 
 

Wednesday, October 16, 1:00 pm — 2:00 pm (Mix with the Masters Workshop Stage)

AES Mix with the Masters Workshop: MM04 - Leslie Brathwaite

Presenter:
Leslie Brathwaite

 
 

Wednesday, October 16, 1:00 pm — 1:30 pm (IP Pavilion Theater)

AoIP Pavilion: AIP25 - Audio in ST 2110 Facility and across WAN

Presenter:
Andy Rayner, Nevion

The ST 2110 and NMOS standards are now maturing and in serious use globally. In an all-IP production environment, there are key requirements for the manipulation of audio within a facility. Further to this, there are challenges in maintaining essence-flow synchronization automatically. When sharing audio between facilities for remote/meshed/distributed production, there are further challenges for integrity, timing, and control.

The presentation will look at all of these facility and wide area challenges and use recent deployments as case studies to demonstrate how these have been overcome.

 
 

Wednesday, October 16, 1:30 pm — 2:30 pm (1E08)

Special Event: SE02 - Mixing & Mastering for Immersive Audio

Moderator:
Rafa Sardina, Fishbone Productions, Inc. - Los Angeles, CA, USA; AfterHours Studios - Los Angeles, CA, USA
Panelists:
Reuben Cohen, Lurssen Mastering - Los Angeles, CA, USA
Gavin Lurssen, Lurssen Mastering - Los Angeles, CA, USA
Michael Romanowski, Coast Mastering - Berkeley, CA, USA; The Tape Project
Ceri Thomas, Dolby Laboratories

As the industry moves toward a more immersive environment, hear from these five experts the latest news regarding workflow, standards, and how to get the maximum immersive experience for your tracks.

 
 

Wednesday, October 16, 2:00 pm — 2:45 pm (EDM Stage)


Electronic Dance Music & DJ Stage: EDJ02 - Waves Product Workshop

Presenter:
Michael Pearson-Adams, Waves - Knoxville, TN, USA

 
 

Wednesday, October 16, 2:00 pm — 2:45 pm (Live Production Stage)

Live Production Stage: LS03 - MILAN Protocol: Deterministic AV Networks

Presenters:
Tim Boot, Global Brand Strategist, Meyer Sound - Berkeley, CA, USA
Henning Kaltheuner, d&b audiotechnik GmbH - Backnang, Germany
Genio Kronauer, L-Acoustics - Marcoussis, France
Morten Lave, Adamson Systems Engineering

Today, interoperability between devices has become a key element of the AV industry’s value proposition, decreasing the value of isolated individual products and shifting focus to integrated systems. The network, which provides the connectivity for individual components to work together, now becomes the grid that defines system architectures. The future of AV requires more than just connecting individual products and components together; it requires value and functionality that can only come from deep system integration. However, planning and handling networks today requires deep IT management skills, which is not what audio and systems engineers signed up for.

Created by Pro AV market leaders in the Avnu Alliance, Milan is a standards-based, user-driven deterministic network protocol for professional media that assures networked AV devices will work together at new levels of convenience, reliability, and functionality.

Milan combines the technical benefits of the AVB standard with Pro AV market-defined device requirements at both the network and the application layer for media streams, formats, clocking and redundancy.

 
 

Wednesday, October 16, 2:00 pm — 2:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS03 - Waves Presents: JC Losada "Mr. Sonic"

Presenter:
JC Losada, Grammy and Latin Grammy Award winning Engineer & Producer

Juan Cristobal Losada is an accomplished songwriter, engineer, mixer, and producer, with a string of top-selling collaborations and credits including Shakira, Santana, Ricky Martin, Enrique Iglesias, and Plácido Domingo. He holds a Latin Grammy Award for his work on Jon Secada's To Beny Moré With Love, winner of Best Traditional Tropical Album, and a Grammy Award in the Best Tropical Latin Album category for Luis Enrique's Ciclos.

 
 

Wednesday, October 16, 2:00 pm — 3:00 pm (Mix with the Masters Workshop Stage)


AES Mix with the Masters Workshop: MM05 - Eddie Kramer

Presenter:
Eddie Kramer, Remark Music Ltd. - Woodland Hills, CA, USA

 
 

Wednesday, October 16, 2:00 pm — 2:30 pm (IP Pavilion Theater)


AoIP Pavilion: AIP06 - Bolero Wireless Intercom System in AES67 Networks

Presenter:
Rick Seegull, Riedel Communications - Burbank, CA, USA

Riedel Communications will provide an overview of their Bolero wireless intercom system, including the newest AES67 mode that facilitates the distribution of antennas over AES67 networks. Then, after a review of how to assemble and configure a Bolero standalone system, two people will be randomly selected from the audience to compete to see who can configure a system in the shortest amount of time.

 
 

Wednesday, October 16, 2:30 pm — 3:00 pm (Booth 266 (Ex. Fl.))


Audio Builders Workshop Booth Talk: ABT02 - Get to Know Your Gear with Free Analysis Software

Presenter:
Peterson Goodwyn, DIY Recording Equipment - Philadelphia, PA, USA

 
 

Wednesday, October 16, 2:30 pm — 3:00 pm (IP Pavilion Theater)


AoIP Pavilion: AIP07 - AES67 / ST 2110 / NMOS - An Overview on Current SDO Activities

Presenter:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany

An update and report on current standardization activities, including AES67 and SMPTE ST 2110; a refresher on the commonalities and constraints between AES67 and ST 2110; and a brief overview of NMOS developments and activities.

 
 

Wednesday, October 16, 2:45 pm — 4:15 pm (1E15+16)

Special Event: SE03 - The Loudness War is Over (If You Want It)

Moderator:
George Massenburg, Schulich School of Music, McGill University - Montreal, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, Quebec, Canada
Panelists:
Serban Ghenea
Gimel "Guru" Keaton
Bob Ludwig, Gateway Mastering Studios, Inc. - Portland, ME, USA
Thomas Lund, Genelec Oy - Iisalmi, Finland
Ann Mincieli, Jungle City Studios - New York, NY, USA

Now that streaming dominates the music listening landscape, it’s time to revisit what loudness really is and how to manage it. Companies including Apple, YouTube, and Spotify each have their own measurement standards and loudness targets, while today’s production paradigm often lacks a traditional infrastructure of project managers and gatekeepers with technical expertise. Artists and record companies—as they always have—want their songs to sound at least as loud as the ones playing before and after them. The stakes are high. So, what to do?

The truth is, we, the creators, are responsible for understanding all of the issues in the loudness discussion. No one else is going to do it. Join us for this lively and informative conversation with some of the best minds in the business who will shed light on both the current unhappy state of loudness and what creators can do to make it better.
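One concrete piece of the loudness discussion reduces to simple arithmetic: a normalizing platform turns a program up or down by the difference between its measured integrated loudness and the platform's target. The sketch below is illustrative only; the -14 LUFS figure is a commonly cited streaming target, but, as the session notes, each service sets its own.

```python
def normalization_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain in dB a normalizing service applies to hit its loudness target.

    Both arguments are integrated loudness values in LUFS.
    """
    return target_lufs - measured_lufs

# A master crushed to -8 LUFS, played against an illustrative -14 LUFS target,
# is simply turned down by 6 dB: the loudness gained in mastering is removed.
print(normalization_gain_db(-8.0, -14.0))  # -6.0
```

This is why "winning" the loudness war on a normalized platform buys nothing: the extra level is subtracted back out, leaving only the dynamic-range damage.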

 
 

Wednesday, October 16, 3:00 pm — 3:30 pm (Booth 266 (Ex. Fl.))


Audio Builders Workshop Booth Talk: ABT03 - DIY DSP Platform—The SHARC Audio Module

Presenter:
Denis Labrecque, DeLab Consulting - Half Moon Bay, CA, USA

 
 

Wednesday, October 16, 3:00 pm — 4:00 pm (Recording Stage)

Project Studio Expo Recording Stage: RS04 - AMBEO: 3D Audio Technology by Sennheiser

Presenter:
Greg Simon, Sennheiser

Greg Simon from Sennheiser will discuss AMBEO technology, products, and applications, including the workflow of Ambisonic recordings.

 
 

Wednesday, October 16, 3:00 pm — 3:45 pm (Live Production Stage)


Live Production Stage: LS04 - Dante for Live Production, Broadcasting and Recording

Presenter:
Mike Picotte, Producer Engineer Sweetwater Sound - Fort Wayne, IN, USA

Expert engineer and installer Mike Picotte will cover Dante basics and how to avoid common mistakes, then take a deeper dive into Dante Controller, Dante Virtual Soundcard, clocking via Dante, and the switches and peripherals that can up your game. Mike will show how to scale systems from a club to an arena, and from a multi-building campus to houses of worship. You will leave more confident and ready to move into the future of broadcast, live, and recorded digital audio with Dante!

 
 

Wednesday, October 16, 3:00 pm — 4:30 pm (South Concourse A)

Poster: P06 - Posters: Audio Signal Processing

P06-1 Modal Representations for Audio Deep Learning
Travis Skare, CCRMA, Stanford University - Stanford, CA, USA; Jonathan S. Abel, Stanford University - Stanford, CA, USA; Julius O. Smith, III, Stanford University - Stanford, CA, USA
Deep learning models for both discriminative and generative tasks have a choice of domain representation. For audio, candidates are often raw waveform data, spectral data, transformed spectral data, or perceptual features. For deep learning tasks related to modal synthesizers or processors, we propose new, modal representations for data. We experiment with representations such as an N-hot binary vector of frequencies, or learning a set of modal filterbank coefficients directly. We use these representations discriminatively (classifying cymbal models from samples) as well as generatively. An intentionally naive application of a basic modal representation to a CVAE designed for MNIST digit images quickly yielded results, which we found surprising given less prior success with traditional representations like spectrogram images. We discuss applications to Generative Adversarial Networks, toward creating a modal reverberator generator.
Convention Paper 10248 (Purchase now)

P06-2 Distortion Modeling of Nonlinear Systems Using Ramped-Sines and Lookup Table
Paul Mayo, University of Maryland - College Park, MD, USA; Wesley Bulla, Belmont University - Nashville, TN, USA
Nonlinear system identification is used to synthesize black-box models of nonlinear audio effects and as such is a widespread topic of interest within the audio industry. As a variety of implementation algorithms provide a myriad of approaches, questions arise as to whether there are major functional differences between methods and implementations. This paper presents a novel method for the black-box measurement of distortion characteristic curves and an analysis of the popular “lookup table” implementation of nonlinear effects. Pros and cons of the techniques are examined from a signal processing perspective, and the basic limitations and efficiencies of the approaches are discussed.
Convention Paper 10249 (Purchase now)
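The lookup-table implementation the paper analyzes can be sketched generically: a static nonlinearity (here tanh soft clipping, chosen only as an example; the paper measures real distortion curves) is sampled into a table, and the effect is applied by interpolating input samples into that table.

```python
import numpy as np

def make_lookup_table(curve, n=4096):
    """Sample a distortion characteristic curve into a table over [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n)
    return x, curve(x)

def apply_waveshaper(signal, table_x, table_y):
    """Distort a signal by linear interpolation into the lookup table.

    Inputs outside [-1, 1] are clipped to the table's domain first.
    """
    return np.interp(np.clip(signal, -1.0, 1.0), table_x, table_y)

# Example: tanh soft clipping stored as a 4096-point table.
tx, ty = make_lookup_table(np.tanh)
out = apply_waveshaper(np.array([0.0, 0.5, 2.0]), tx, ty)
```

The table resolution and the interpolation scheme are exactly the kinds of implementation choices whose audible consequences the paper's measurement method is designed to expose.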

P06-3 An Open Audio Processing Platform Using SoC FPGAs and Model-Based Development
Trevor Vannoy, Montana State University - Bozeman, MT, USA; Flat Earth Inc. - Bozeman, MT, USA; Tyler Davis, Flat Earth Inc. - Bozeman, MT, USA; Connor Dack, Flat Earth Inc. - Bozeman, MT, USA; Dustin Sobrero, Flat Earth Inc. - Bozeman, MT, USA; Ross Snider, Montana State University - Bozeman, MT, USA; Flat Earth Inc. - Bozeman, MT, USA
The development cycle for high-performance audio applications using System-on-Chip (SoC) Field Programmable Gate Arrays (FPGAs) is long and complex. Due to their inherently parallel nature, SoC FPGAs are ideal for low-latency, high-performance signal processing, but these devices require a complex development process. To address these challenges, an open-source audio processing platform based on SoC FPGAs is presented. To reduce the difficulty, we deploy a model-based hardware/software co-design methodology that increases productivity and accessibility for non-experts. A modular multi-effects processor was developed and demonstrated on our hardware platform. This demonstration shows how a design can be constructed and provides a framework for developing more complex audio designs that can be used on our platform.
Convention Paper 10250 (Purchase now)

P06-4 Objective Measurement of Stereophonic Audio Quality in the Directional Loudness Domain
Pablo Delgado, International Audio Laboratories Erlangen - Erlangen, Germany; Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Jürgen Herre, International Audio Laboratories Erlangen - Erlangen, Germany; Fraunhofer IIS - Erlangen, Germany
Automated audio quality prediction is still considered a challenge for stereo or multichannel signals carrying spatial information. A system that accurately and reliably predicts quality scores obtained by time-consuming listening tests can be of great advantage in saving resources, for instance, in the evaluation of parametric spatial audio codecs. Most of the solutions so far work with individual comparisons of distortions of interchannel cues across time and frequency, known to correlate to distortions in the evoked spatial image of the subject listener. We propose a scene analysis method that considers signal loudness distributed across estimations of perceived source directions on the horizontal plane. The calculation of distortion features in the directional loudness domain (as opposed to the time-frequency domain) seems to provide equal or better correlation with subjectively perceived quality degradation than previous methods, as confirmed by experiments with an extensive database of parametric audio codec listening tests. We investigate the effect of a number of design alternatives (based on psychoacoustic principles) on the overall prediction performance of the associated quality measurement system.
Convention Paper 10251 (Purchase now)
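The abstract's core move, analyzing loudness in a directional rather than time-frequency domain, can be illustrated with a toy numpy sketch (not the authors' implementation): map each time-frequency bin to an estimated panning direction from the interchannel level ratio, then accumulate energy into a direction histogram.

```python
import numpy as np

def directional_loudness(left_mag, right_mag, n_dirs=31, eps=1e-12):
    """Distribute time-frequency energy across estimated panning directions.

    left_mag, right_mag: magnitude spectrograms (freq x time).
    Returns a histogram of energy over n_dirs directions from
    full-left (bin 0) to full-right (bin n_dirs - 1). Illustrative only.
    """
    # Panning index in [0, 1]: 0 = all energy left, 1 = all energy right.
    pan = right_mag / (left_mag + right_mag + eps)
    energy = left_mag**2 + right_mag**2
    bins = np.clip((pan * n_dirs).astype(int), 0, n_dirs - 1)
    hist = np.zeros(n_dirs)
    np.add.at(hist, bins.ravel(), energy.ravel())
    return hist

# A hard-left source should pile all of its energy into the first bin.
L = np.ones((64, 10))
R = np.zeros((64, 10))
hist = directional_loudness(L, R)
```

Distortion features would then be computed between such histograms of reference and coded signals, rather than between raw spectrograms.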

P06-5 Detection of the Effect of Window Duration in an Audio Source Separation Paradigm
Ryan Miller, Belmont University - Nashville, TN, USA; Wesley Bulla, Belmont University - Nashville, TN, USA; Eric Tarr, Belmont University - Nashville, TN, USA
Non-negative matrix factorization (NMF) is a commonly used method for audio source separation in applications such as polyphonic music separation and noise removal. Previous research evaluated the use of additional algorithmic components and systems in efforts to improve the effectiveness of NMF. This study examined how the short-time Fourier transform (STFT) window duration used in the algorithm might affect detectable differences in separation performance. An ABX listening test compared speech extracted from two types of noise-contaminated mixtures at different window durations to determine whether listeners could discriminate between them. It was found that the window duration had a significant impact on subject performance in both white- and conversation-noise cases, with lower scores for the latter condition.
Convention Paper 10252 (Purchase now)
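The separation backbone here is standard NMF; only the STFT window duration is varied. A minimal numpy sketch of NMF with multiplicative updates (Euclidean cost) on a toy magnitude spectrogram, purely illustrative and not the study's code:

```python
import numpy as np

def nmf(V, rank=2, n_iter=200, seed=0, eps=1e-9):
    """Basic NMF with multiplicative updates (Euclidean cost): V ~ W @ H."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectral bases
    return W, H

# Toy magnitude spectrogram built from two spectrally distinct sources.
F, T = 32, 40
s1 = np.zeros(F); s1[2] = 1.0    # low-frequency source template
s2 = np.zeros(F); s2[20] = 1.0   # high-frequency source template
a1 = np.abs(np.sin(np.linspace(0, 3, T)))
a2 = np.abs(np.cos(np.linspace(0, 3, T)))
V = np.outer(s1, a1) + np.outer(s2, a2)
W, H = nmf(V, rank=2)
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the study's setting, changing the STFT window length changes the shape of V (time versus frequency resolution), which is what drives the audible differences the ABX test probed.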

P06-6 Use of DNN-Based Beamforming Applied to Different Microphone Array Configurations
Tae Woo Kim, Gwangju Institute of Science and Technology (GIST) - Gwangju, South Korea; Nam Kyun Kim, Gwangju Institute of Science and Technology (GIST) - Gwangju, South Korea; Geon Woo Lee, Gwangju Institute of Science and Technology (GIST) - Gwangju, South Korea; Inyoung Park, Gwangju Institute of Science and Technology (GIST) - Gwangju, South Korea; Hong Kook Kim, Gwangju Institute of Science and Technology (GIST) - Gwangju, South Korea
Minimum variance distortionless response (MVDR) beamforming is one of the most popular multichannel signal processing techniques for dereverberation and/or noise reduction. However, the MVDR beamformer has the limitation that it must be designed to be dependent on the receiver array geometry. This paper demonstrates an experimental setup and results by designing a deep learning-based MVDR beamformer and applying it to different microphone array configurations. Consequently, it is shown that the deep learning-based MVDR beamformer provides more robust performance under mismatched microphone array configurations than the conventional statistical MVDR one.
Convention Paper 10253 (Purchase now)
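The conventional statistical MVDR beamformer that the paper takes as its baseline has a closed-form weight vector, w = R⁻¹d / (dᴴR⁻¹d); the array-geometry dependence the paper works around enters through the steering vector d. A minimal numpy sketch (the steering vector and covariance below are toy values):

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR beamformer weights: w = R^{-1} d / (d^H R^{-1} d).

    R: (M x M) noise covariance matrix; d: (M,) steering vector.
    Minimizes output noise power subject to the distortionless
    constraint w^H d = 1 toward the look direction.
    """
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# 4-mic array, unit steering vector, identity (spatially white) noise:
# in this special case MVDR reduces to delay-and-sum weights of 1/M.
M = 4
d = np.ones(M, dtype=complex)
w = mvdr_weights(np.eye(M), d)
distortionless = w.conj() @ d  # constraint: should equal 1
```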

P06-7 Deep Neural Network Based Guided Speech Bandwidth Extension
Konstantin Schmidt, Friedrich-Alexander-University (FAU) - Erlangen, Germany; International Audio Laboratories Erlangen - Erlangen; Bernd Edler, Friedrich Alexander University - Erlangen-Nürnberg, Germany; Fraunhofer IIS - Erlangen, Germany
To this day, telephone speech is still limited to the range of 200 to 3400 Hz, since the predominant codecs in public switched telephone networks are AMR-NB, G.711, and G.722 [1, 2, 3]. Blind bandwidth extension (blind BWE, BBWE) can improve the perceived quality as well as the intelligibility of coded speech without changing the transmission network or the speech codec. The BBWE used in this work is based on deep neural networks (DNNs) and has already shown good performance [4]. Although this BBWE enhances the speech without producing too many artifacts, it sometimes fails to enhance prominent fricatives, which can result in muffled speech. In order to better synthesize prominent fricatives, the BBWE is extended by sending a single bit of side information, here referred to as guided BWE. This bit may be transmitted, e.g., by watermarking, so that no changes to the transmission network or the speech codec are required. Different DNN configurations (including convolutional (Conv.) layers as well as long short-term memory (LSTM) layers) making use of this bit have been evaluated. The BBWE has a low computational complexity and an algorithmic delay of only 12 ms, and can be applied in state-of-the-art speech and audio codecs.
Convention Paper 10254 (Purchase now)
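For intuition only, here is a crude non-DNN stand-in for bandwidth extension by spectral folding, with the single guidance bit used (hypothetically) to boost the synthesized band when a prominent fricative is flagged. The paper's actual method is a DNN and is not reproduced here; the 0.3 attenuation and 6 dB boost are arbitrary assumptions.

```python
import numpy as np

def fold_extend(lowband_mag, fricative_bit, boost_db=6.0):
    """Crude bandwidth extension by spectral folding: mirror the
    narrowband magnitude spectrum into the missing high band.
    The guidance bit boosts the generated band, loosely mimicking
    the paper's guided mode for prominent fricatives.
    """
    high = lowband_mag[::-1] * 0.3          # mirrored, attenuated copy
    if fricative_bit:
        high = high * 10.0 ** (boost_db / 20.0)
    return np.concatenate([lowband_mag, high])

nb = np.linspace(1.0, 0.1, 64)              # toy narrowband magnitude frame
wb_plain = fold_extend(nb, fricative_bit=0)
wb_guided = fold_extend(nb, fricative_bit=1)
```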

P06-8 Analysis of the Sound Emitted by Honey Bees in a Beehive
Stefania Cecchi, Università Politecnica delle Marche - Ancona, Italy; Alessandro Terenzi, Università Politecnica delle Marche - Ancona, Italy; Simone Orcioni, Università Politecnica delle Marche - Ancona, Italy; Francesco Piazza, Università Politecnica delle Marche - Ancona (AN), Italy
The increase in honey bee mortality over recent years has drawn great attention to the possibility of intensive beehive monitoring in order to better understand the problems that are seriously affecting honey bee health. It is well known that the sound emitted inside a beehive is one of the key parameters for non-invasive monitoring capable of determining some aspects of the bees' condition. The proposed work aims to analyze the bees' sound, introducing feature extraction useful for sound classification and for detecting dangerous situations. Considering a real scenario, several experiments have been performed focusing on particular events, such as swarming, to highlight the potential of the proposed approach.
Convention Paper 10255 (Purchase now)
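The kind of feature extraction the abstract mentions can be sketched minimally in numpy. The features below (spectral centroid, bandwidth, RMS) are illustrative stand-ins for the richer sets (e.g., MFCCs) typically fed to classifiers, not the authors' feature set; the 250 Hz test tone is an assumption chosen near reported bee wingbeat frequencies.

```python
import numpy as np

def spectral_features(x, sr):
    """Toy feature vector for a mono frame: spectral centroid (Hz),
    spectral bandwidth (Hz), and RMS level. Illustrative only.
    """
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    power = mag**2
    total = power.sum() + 1e-12
    centroid = (freqs * power).sum() / total
    bandwidth = np.sqrt(((freqs - centroid) ** 2 * power).sum() / total)
    rms = np.sqrt(np.mean(x**2))
    return centroid, bandwidth, rms

# A 250 Hz tone at an 8 kHz sampling rate: the centroid should land
# on the tone frequency, and the RMS of a full-scale sine is ~0.707.
sr = 8000
t = np.arange(2048) / sr
centroid, bandwidth, rms = spectral_features(np.sin(2 * np.pi * 250 * t), sr)
```

A classifier for events such as swarming would then be trained on sequences of such frame-level feature vectors.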

P06-9 Improvement of DNN-Based Speech Enhancement with Non-Normalized Features by Using an Automatic Gain Control
Linjuan Cheng, Institute of Acoustics, Chinese Academy of Sciences - Beijing, China; Chengshi Zheng, Institute of Acoustics, Chinese Academy of Sciences - Beijing, China; Renhua Peng, Chinese Academy of Sciences - Beijing, China; Xiaodong Li, Chinese Academy of Sciences - Beijing, China; Chinese Academy of Sciences - Shanghai, China
In deep neural network (DNN)-based speech enhancement algorithms, performance may degrade when the peak level of the noisy speech differs significantly from that of the training datasets, especially when non-normalized features, such as log-power spectra, are used in practical applications. To overcome this shortcoming, we introduce an automatic gain control (AGC) method as a preprocessing technique. By doing so, we can train the model with the same peak level for all the speech utterances. To further improve the proposed DNN-based algorithm, the feature compensation method is combined with the AGC method. Experimental results indicate that the proposed algorithm maintains consistent performance when the peak level of the noisy speech varies over a large range.
Convention Paper 10256 (Purchase now)
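The AGC preprocessing idea reduces to scaling every utterance to a common peak level before feature extraction, so the non-normalized features stay in the range the DNN saw during training. A minimal sketch; the -3 dBFS target is an arbitrary assumption, not the paper's value:

```python
import numpy as np

def agc_normalize(x, target_peak_db=-3.0, eps=1e-12):
    """Scale a waveform so its peak level matches a fixed target (dBFS).

    Hypothetical preprocessing in the spirit of the paper: training and
    inference then see the same peak level, keeping non-normalized
    features (e.g., log-power spectra) in the trained range.
    """
    peak = np.max(np.abs(x)) + eps
    target = 10.0 ** (target_peak_db / 20.0)
    gain = target / peak
    return x * gain, gain

# A quiet utterance is boosted up to the -3 dBFS peak target.
x = 0.05 * np.sin(2 * np.pi * np.arange(1000) / 50)
y, gain = agc_normalize(x)
peak_db = 20 * np.log10(np.max(np.abs(y)))
```

The same gain (or its inverse) can be tracked so the enhanced output is restored to the original level after processing.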

 
 

Wednesday, October 16, 3:00 pm — 4:00 pm (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM06 - Sylvia Massy

Presenter:
Sylvia Massy, Unhinged Industries - Ashland

 
 

Wednesday, October 16, 3:00 pm — 3:30 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP08 - Innovation through Open Technologies

Presenter:
Nestor Amaya, Ross Video / COVELOZ Technologies - Ottawa, ON, Canada

Where would today's Cloud, smartphones and embedded IoT engines be without Linux? And could we have gotten here with a proprietary OS such as Windows? Similarly, this presentation argues that Pro Audio/Video applications need open technologies to serve the unique needs of their users. Our markets need a fully open stack, beyond just AES67/ST 2110 transport, to reap the benefits of our investment in A/V networking technologies. Learn how NMOS, EmBER+ and similar open technologies enable you to innovate and serve your users better when compared to proprietary control systems.

 
 

Wednesday, October 16, 3:30 pm — 4:00 pm (Booth 266 (Ex. Fl.))

Photo

Audio Builders Workshop Booth Talk: ABT04 - Open Source Modular Hardware for Analog and DSP Audio System Building

Presenter:
Brewster LaMacchia, Clockworks Signal Processing LLC - Andover, MA, USA

 
 

Wednesday, October 16, 3:30 pm — 4:00 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP09 - Reinventing Intercom with SMPTE ST 2110-30

Presenter:
Martin Dyster, The Telos Alliance - Cleveland, OH, USA

This presentation looks at the parallels between the emergence of audio over IP standards and the development of a product in the Intercom market sector that has taken full advantage of IP technology.

 
 

Wednesday, October 16, 4:00 pm — 4:45 pm (Live Production Stage)

Live Production Stage: LS05 - Audio Codec Technology for Reliable Transport over Unreliable Networks – IP Audio in Broadcast Applications

IP Audio Codec technology for reliable transport over unreliable networks is a class designed to educate attendees about the world of IP Audio Codecs for Broadcast Applications. This class will give attendees the foundations of IP Audio, IP Audio Transport over a LAN / WAN, as well as the keys to reliable audio transport for Broadcast-Critical applications.

Key topics will include:
· Relationship between network speed, audio compression, and latency
· Choosing the right audio compression algorithm for your given application
· Error correction technology available to ensure secure and reliable audio transport
· Key technologies available for broadcasting Dante, MADI, AES67, and AES/EBU audio using your existing internet connection
· Cost-effective solutions available for Remote Broadcasting, STL, SSL, DVB Audio, and Web Radio

This presentation will look into the advantages each technology has to offer, what they are, where and how they are used, and answer any questions the participants have.

 
 

Wednesday, October 16, 4:00 pm — 4:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS05 - SoundGirls Presents: What it Takes to Have a Successful Career in Audio

Moderator:
Leslie Gaston-Bird, Mix Messiah Productions - Brighton, UK; Audio Engineering Society - London, UK
Presenter:
Karrie Keyes, Executive Director SoundGirls.org
Panelists:
Piper Payne, Mastering Engineer, Infrasonic Sound - San Francisco Bay Area, CA
Michelle Sabolchick Pettinato, Sound Engineer- Elvis Costello - Scranton, PA USA; MixingMusicLive.com
Jessica Thompson, Jessica Thompson Audio - Berkeley, CA, USA
April Tucker
Catherine Vericolli, Fivethirteen Recording - Tempe, AZ, USA; Useful Industries - Nashville, TN, USA

 
 

Wednesday, October 16, 4:00 pm — 5:00 pm (Mix with the Masters Workshop Stage)

AES Mix with the Masters Workshop: MM07 - Bob Power

Presenter:
Bob Power

 
 

Wednesday, October 16, 4:00 pm — 4:30 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP10 - AES67 / ST 2110 Audio Transport & Routing (NMOS IS-08)

Presenter:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany

This session explains the details of networked audio transport (as defined in AES67 and ST 2110) and how connections among devices with dedicated channel mapping utilizing NMOS IS-08 can be established.

 
 

Wednesday, October 16, 4:30 pm — 5:30 pm (1E15+16)

Special Event: SE04 - The Making of Sheryl Crow's "Threads"

Moderator:
Glenn Lorbecki, Glenn Sound Inc. - Seattle, WA, USA
Panelist:
Dave O'Donnell, Mixer/Engineer

An analysis of the making of the new Sheryl Crow record Threads featuring a star-studded cast of talented artists.

 
 

Wednesday, October 16, 4:30 pm — 5:00 pm (Booth 266 (Ex. Fl.))

Audio Builders Workshop Booth Talk: ABT05 - Healing Power of DIY Gear

Presenter:
Buddy Lee Dobberteen

 
 

Wednesday, October 16, 4:30 pm — 5:00 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP11 - Technical: Synchronization & Alignment (ST 2110 / AES67)

Presenter:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany

A deep dive into the secrets behind timing & synchronization of AES67 & ST 2110 streams. The meaning of PTP, media clocks, RTP, synchronization parameters (SDP), and the magic of stream alignment will be unveiled in this compact presentation.

 
 

Wednesday, October 16, 5:00 pm — 5:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS06 - Kimbra - Technology Facilitates Creativity

Presenters:
Kimbra
Chris Tabron

Acclaimed singer/songwriter Kimbra talks with producer Chris Tabron about the creative process in recording.

 
 

Wednesday, October 16, 5:00 pm — 5:45 pm (Live Production Stage)

Live Production Stage: LS06 - Lawo Presents: Mixing Music for TV Broadcast

Presenter:
Josiah Gluck

Josiah Gluck, Emmy Award-winning sound engineer, takes a look at the special demands and challenges of mixing live music for broadcast. Josiah will explain the process behind creating some of the most iconic and recognizable live TV music shows, including Saturday Night Live, and how to turn inspiration into success.

Josiah Gluck just started his 28th season as Co-Music Engineer for "Saturday Night Live." He received an Emmy for his music mixing work on the SNL 40th Anniversary Show. He was previously nominated for an Emmy for music mixing on the second live "30 Rock" show broadcast from Studio 8H.

Also the recipient of three Grammy nominations for engineering, Josiah has been the producer and/or engineer on over 200 CDs for artists such as Karrin Allyson, Kevin Eubanks, Dave Grusin, Diane Schuur, Nnenna Freelon, Curtis Stigers, Patti Austin, B.B. King, Billy Cobham, and Dave Stryker. He has also engineered numerous CDs for USAF bands all over the country.

Additional television work includes "Night Music," "The Rosie O’Donnell Show," "Late Night with Conan O’Brien," “Last Call with Carson Daly,” and “The Tonight Show” and "Christmas In Rockefeller Center."

 
 

Wednesday, October 16, 5:00 pm — 6:00 pm (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM08 - Chris Lord-Alge

Presenter:
Chris Lord-Alge, Mix LA - Los Angeles, CA, USA

 
 

Wednesday, October 16, 5:00 pm — 5:30 pm (IP Pavilion Theater)

AoIP Pavilion: AIP12 - ST 2110 Enabled Centralized Production

Presenter:
Lucas Zwicker, Lawo AG

While traditional OB trucks still play a major role in broadcast, more and more companies rely on centralized workflows and remote production. How can ST 2110 help to enable these workflows? And why does it help to streamline REMI procedures? Let us find out about the building blocks of this approach based on a real-life example.

 
 

Wednesday, October 16, 5:30 pm — 7:00 pm (Off-Site 2)

Special Event: SE00 - Diversity & Inclusion Cocktail Party

Houndstooth Pub
520 8th Avenue @ 37th Street

The AES Diversity & Inclusion Committee has been set up to acknowledge, celebrate and encourage diversity within the audio engineering community. This informal meet-and-greet is a fantastic opportunity to schmooze with our committee and to meet one another.

 
 

Wednesday, October 16, 5:30 pm — 6:00 pm (Demo rooms)

Product Development: PD05 - Vendor Event 1: Audio Precision

 
 

Thursday, October 17, 9:00 am — 5:00 pm (SDA Booth)

Photo

Student / Career: SC07 - Resume Review (for Students, Recent Grads, and Young Professionals)

Moderator:
Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA; Arizona State University - Phoenix, AZ, USA

Students, recent graduates, and young professionals… Often your resume is an employer’s first impression of you. Naturally, you want to make a good one. Employers often use job search websites to search for candidates. Some use automated software to scan your resume and, in some cases, your LinkedIn/social media profiles as well. Questions may arise regarding formatting, length, and keywords and phrases, so that your resume shows up in searches and lands on the desk of the hiring manager. No matter how refined your resume may be, it is always good to have someone else review your materials. Receive a one-on-one, 20–25 minute review of your resume from a hiring manager in the audio engineering business. Plus, if time allows, your cover letter and online presence will be reviewed as well.

Sign up at the student (SDA) booth immediately upon arrival. If you would like to have your resume reviewed on Wednesday, October 16, prior to SDA-1, please email the request to: aesresumereview@outlook.com. You may be asked to upload your resume prior to your review appointment. Uploaded resumes will only be seen by the moderator and will be deleted at the conclusion of the 147th Pro Audio Convention.

Reviews take place throughout the convention by appointment only.

 
 

Thursday, October 17, 9:00 am — 10:30 am (South Concourse A)

Poster: P09 - Posters: Applications in Audio

P09-1 Analyzing Loudness Aspects of 4.2 Million Musical Albums in Search of an Optimal Loudness Target for Music Streaming
Eelco Grimm, HKU University of the Arts - Utrecht, Netherlands; Grimm Audio - Eindhoven, The Netherlands
In cooperation with music streaming service Tidal, 4.2 million albums were analyzed for loudness aspects such as loudest and softest track loudness. Evidence of the development of the loudness war was found, and a suggestion for music streaming services to use album normalization at –14 LUFS for mobile platforms and –18 LUFS or lower for stationary platforms was derived from the data set and a limited subject study. Tidal has implemented the recommendation and reports positive results.
Convention Paper 10268 (Purchase now)
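The normalization the paper recommends comes down to a single per-album gain toward the target loudness; one gain per album preserves the intended track-to-track level differences. A sketch of the arithmetic (the BS.1770 loudness measurement itself is omitted; the -8 LUFS example album is an assumption):

```python
def album_normalization_gain(album_loudness_lufs, target_lufs=-14.0):
    """Gain in dB applied to a whole album so its integrated loudness
    matches the streaming target. Applying one gain to every track
    keeps the album's internal dynamics intact.
    """
    return target_lufs - album_loudness_lufs

# A loudness-war-era album mastered at -8 LUFS is turned down 6 dB for
# the mobile target, and 10 dB for the quieter stationary target.
mobile_gain = album_normalization_gain(-8.0, target_lufs=-14.0)
stationary_gain = album_normalization_gain(-8.0, target_lufs=-18.0)
```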

P09-2 Audio Data Augmentation for Road Objects Classification by an Artificial Neural Network
Ohad Barak, Mentor Graphics - Mountain View, CA, USA; Nizar Sallem, Mentor Graphics - Mountain View, CA, USA
Following the resurgence of machine learning within the context of autonomous driving, the need for acquiring and labeling data has expanded manyfold. Despite the large amount of available visual data (images, point clouds, etc.), researchers apply augmentation techniques to extend the training dataset, which improves classification accuracy. When trying to exploit audio data for autonomous driving, two challenges immediately surfaced: first, the lack of available data, and second, the absence of augmentation techniques. In this paper we introduce a series of augmentation techniques suitable for audio data. We apply several procedures, inspired by data augmentation for image classification, that transform and distort the original data to produce similar effects on sound. We show the increase in overall accuracy of our neural network for sound classification by comparing it to the non-augmented version.
Convention Paper 10269 (Purchase now)
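One of the simplest augmentation transforms of the kind described, random gain plus additive noise, can be sketched as follows; the paper's actual transform set is not reproduced here, and the -20 dB noise level and ±6 dB gain range are arbitrary assumptions:

```python
import numpy as np

def augment(x, rng, noise_db=-20.0, max_gain_db=6.0):
    """One simple audio augmentation pass: random gain plus additive
    noise scaled relative to the signal RMS. A small stand-in for the
    image-inspired transforms the paper applies.
    """
    gain_db = rng.uniform(-max_gain_db, max_gain_db)
    y = x * 10.0 ** (gain_db / 20.0)
    rms = np.sqrt(np.mean(y**2)) + 1e-12
    noise = rng.standard_normal(len(y)) * rms * 10.0 ** (noise_db / 20.0)
    return y + noise

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * np.arange(4000) / 40)
batch = [augment(x, rng) for _ in range(4)]  # 4 distinct variants of one clip
```

Each labeled clip can thus yield many training examples, which is the point when labeled audio is scarce.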

P09-3 Is Binaural Spatialization the Future of Hip-Hop?
Kierian Turner, University of Lethbridge - Lethbridge, AB, Canada; Amandine Pras, Digital Audio Arts - University of Lethbridge - Lethbridge, Alberta, Canada; School for Advanced Studies in the Social Sciences - Paris, France
Modern hip-hop is typically associated with samples and MIDI and not so much with creative source spatialization since the energy-driving elements are usually located in the center of a stereo image. To evaluate the impact of certain element placements behind, above, or underneath the listener on the listening experience, we experimented beyond standard mixing practices by spatializing beats and vocals of two hip-hop tracks in different ways. Then, 16 hip-hop musicians, producers, and enthusiasts, and three audio engineers compared a stereo and a binaural version of these two tracks in a perceptual experiment. Results showed that hip-hop listeners expect a few elements, including the vocals, to be mixed conventionally in order to create a cohesive mix and to minimize distractions.
Convention Paper 10270 (Purchase now)

P09-4 Alignment and Timeline Construction for Incomplete Analogue Audience Recordings of Historical Live Music Concerts
Thomas Wilmering, Queen Mary University of London - London, UK; Centre for Digital Music (C4DM); Florian Thalmann, Queen Mary University of London - London, UK; Mark Sandler, Queen Mary University of London - London, UK
Analogue recordings pose specific problems during automatic alignment, such as distortion due to physical degradation, or differences in tape speed during recording, copying, and digitization. Oftentimes, recordings are incomplete, exhibiting gaps of different lengths. In this paper we propose a method to align multiple digitized analogue recordings of the same concerts, of varying quality and song segmentation. The process includes the automatic construction of a reference concert timeline. We evaluate alignment methods on a synthetic dataset and apply our algorithm to real-world data.
Convention Paper 10271 (Purchase now)
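The basic offset-estimation step underlying such alignment can be sketched with FFT-based cross-correlation; coping with tape-speed drift, degradation, and gaps, as the paper must, is omitted from this toy version:

```python
import numpy as np

def estimate_offset(ref, rec):
    """Estimate the lag (in samples) of rec relative to ref via
    FFT-based cross-correlation. Shows only the basic offset step
    of aligning one recording against a reference timeline.
    """
    n = len(ref) + len(rec) - 1
    nfft = 1 << (n - 1).bit_length()      # zero-pad to a power of two
    R = np.fft.rfft(ref, nfft) * np.conj(np.fft.rfft(rec, nfft))
    corr = np.fft.irfft(R, nfft)
    lags = np.arange(nfft)
    lags[lags > nfft // 2] -= nfft        # wrap to signed lags
    return int(lags[np.argmax(corr)])

rng = np.random.default_rng(1)
ref = rng.standard_normal(8000)
rec = ref[500:6000]                 # an incomplete copy starting 500 samples in
offset = estimate_offset(ref, rec)  # expect 500
```

Repeating this per recording, then reconciling the offsets, yields the kind of shared concert timeline the paper constructs.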

P09-5 Noise-Robust Automatic Speech Recognition with Convolutional Neural Network and Time-Delay Neural Network
Jie Wang, Guangzhou University - Guangzhou, China; Dunze Wang, Guangzhou University - Guangzhou, China; Yunda Chen, Guangzhou University - Guangzhou, China; Xun Lu, Power Grid Planning Center, Guangdong Power Grid Company - Guangdong, China; Chengshi Zheng, Institute of Acoustics, Chinese Academy of Sciences - Beijing, China
To improve the performance of automatic speech recognition in noisy environments, a convolutional neural network (CNN) combined with a time-delay neural network (TDNN), referred to as CNN-TDNN, is introduced. The CNN-TDNN model is further optimized by factoring the parameter matrix in the time-delay hidden layers and adding a time-restricted self-attention layer after the CNN-TDNN hidden layers. Experimental results show that the optimized CNN-TDNN model performs better than DNN, CNN, TDNN, and plain CNN-TDNN models. The average word error rate (WER) is reduced by 11.76% compared with the baselines.
Convention Paper 10272 (Purchase now)
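The reported metric is word error rate. For reference, a self-contained sketch of its standard edit-distance computation (substitutions, insertions, and deletions over word sequences, divided by the reference length):

```python
def wer(ref, hyp):
    """Word error rate via Levenshtein distance between word lists."""
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six: WER = 1/6.
score = wer("the cat sat on the mat".split(), "the cat sat on mat".split())
```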

 
 

Thursday, October 17, 9:30 am — 11:00 am (1E15+16)

Special Event: HH01 - Chopped and Looped—Inside the Art of Sampling for Hip-Hop

Moderator:
Paul "Willie Green" Womack, Willie Green Music - Brooklyn, NY, USA
Panelists:
Just Blaze, Jay-Z, Kanye West
Breakbeat Lou, Ultimate Breaks and Beats
Hank Shocklee, Shocklee Entertainment - New York, NY, USA
Ebonie Smith, Atlantic Records/Hamilton Cast Album

Celebrating the art of the audio collage, the panelists will discuss the production technique that launched a genre. Exploring the process of digging for samples, techniques and tools for composition, and concepts for processing and mixing, this panel will cover all aspects of sample driven production.

 
 

Thursday, October 17, 10:00 am — 6:00 pm

Exhibit: Exhibition

Leave plenty of time to walk the exhibits floor so you can learn about the latest products and technologies, scope out your competitors, or simply drool over the latest toys.

 
 

Thursday, October 17, 10:00 am — 11:00 am (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM09 - Peter Katis

Presenter:
Peter Katis, Tarquin Studios - Bridgeport, CT, USA

 
 

Thursday, October 17, 10:30 am — 11:00 am (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP13 - Introduction to the Audio/Video-over-IP Technology Pavilion

Presenter:
Terry Holton, AIMS Audio Group Chairman - London, UK

The Audio/Video-over-IP Technology Pavilion is an initiative created by the AES in partnership with the Alliance for IP Media Solutions (AIMS). Following the success of last year’s pavilion which provided valuable insights and practical information about audio over IP networking, the scope of this year’s pavilion has been expanded to also include video and other related professional media networking topics. This presentation will provide an overview of the various aspects of the pavilion as well as an introduction to AES67 and its relationship to the SMPTE ST 2110 standard.

 
 

Thursday, October 17, 10:45 am — 12:15 pm (South Concourse A)

Student / Career: SC08 - Saul Walker Student Design Competition

All accepted entries to the AES Saul Walker Student Design Competition are given the opportunity to show off their designs at this poster/tabletop exhibition. The session is an opportunity for aspiring student hardware and software engineers to have their projects seen by the AES design community. It is an invaluable career-building event and a great place for companies to identify their next employees. Students from both audio and non-audio backgrounds are encouraged to participate. Few restrictions are placed on the nature of the projects, which may include loudspeaker designs, DSP plug-ins, analog hardware, signal analysis tools, mobile applications, and sound synthesis devices. Attendees will observe new, original ideas implemented in working-model prototypes.

 
 

Thursday, October 17, 11:00 am — 11:30 am (Booth 266 (Ex. Fl.))

Audio Builders Workshop Booth Talk: ABT06 - Find Your DIY Voice, Hack Your Own Stuff

Presenter:
Michael Swanson

 
 

Thursday, October 17, 11:00 am — 11:45 am (Live Production Stage)

Live Production Stage: LS07 - Meyer Sound Presents: M-Noise 101

 
 

Thursday, October 17, 11:00 am — 11:45 am (Recording Stage)

Project Studio Expo Recording Stage: RS07 - SonicScoop Podcast LIVE: Monitors, Acoustics, and Correction: The Three Keys to Upgrading Your Control Room

Presenter:
Justin Colletti

Good monitoring is arguably the most important asset in any professional studio, and bad monitoring is a liability that is a major challenge to overcome. Whatever budget and room size you have to work with, getting the most out of your listening environment hinges on three things: 1. Selecting the right speakers for your room and your needs, 2. Having a smart and cost-effective plan for acoustic treatment and 3. Using room and speaker correction to keep coloration to an absolute minimum. In this live recording of the SonicScoop Podcast, Justin Colletti invites three special guests to talk about these three most essential elements in upgrading your studio's monitoring situation so that you can finally trust what you hear and make great-sounding work with a minimum of guesswork.

 
 

Thursday, October 17, 11:00 am — 12:00 pm (EDM Stage)

Electronic Dance Music & DJ Stage: EDJ09 - AVID

 
 

Thursday, October 17, 11:00 am — 12:00 pm (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM10 - Michael Brauer

Presenter:
Michael Brauer, Michael Brauer - New York, NY, USA

 
 

Thursday, October 17, 11:00 am — 11:30 am (IP Pavilion Theater)

AoIP Pavilion: AIP14 - How to Configure Your RAVENNA Setup with Globcon - New, “Global Control” Software Platform

Presenter:
Luca Giaroli, DirectOut

Globcon is a new, free, global control software platform for the management of professional audio equipment. It has been conceived and designed from the end user’s point of view to allow consistent and coherent control of multiple pieces of equipment from different brands as if they were one. The goal of globcon is to offer professional users an increasingly powerful tool to manage environments of any size in a modern, ergonomic, flexible, efficient, and cost-effective way. In September 2019 the official version of globcon was released, following the completion of a successful beta testing program that proved the stability and capabilities of the software. RAVENNA partner DirectOut is the first adopter of globcon, and many other RAVENNA partners are currently working on adding globcon support.

 
 

Thursday, October 17, 11:15 am — 12:15 pm (1E15+16)

Special Event: SE05 - Show Me the Money: Funding Your Audio Dream

Moderator:
Heather D. Rafter, RafterMarsh US - San Francisco, CA, USA
Panelists:
Phil Dudderidge, Executive Chairman, Focusrite PLC - High Wycombe, Bucks, UK
Mark Ethier, CEO, iZotope - Cambridge, MA, USA
Ethan Jacks, Founder, MediaBridge Capital - Boston, MA, USA
Piper Payne, Mastering Engineer, Infrasonic Sound - San Francisco Bay Area, CA
Wisam Reid, Harvard Medical School - Cambridge, MA, USA; MIT

This panel of industry insiders will share their tips on funding your audio passion, whether you’re a student, a startup, or an established company wishing to expand. We’ll take you through every scenario: from scholarships and grants, to crowdfunding via Kickstarter and other campaigns, to raising money through friends-and-family rounds and more. We’ll demystify venture capital, debt financing, investment banking, and private equity, and we’ll also explore growth through merger and acquisition or IPO. Whether you’re a student, a solo audio developer, or a new or well-established company, this program will guide you through the financing path that best meets your needs.

 
 

Thursday, October 17, 11:30 am — 12:00 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP15 - JT-NM Tested Program - Test Plans and Results

Presenter:
Ievgen Kostiukevych, European Broadcasting Union - Le Grand-Saconnex, Genéve, Switzerland

The JT-NM Tested Program was repeated in August 2019 with the addition of AMWA NMOS/JT-NM TR-1001-1 testing, and new revisions of the test plans were produced. What does all this mean for end customers? The editor and coordinator of the program will explain the reasoning behind it, the technical details, what changed in the new revisions, how it all was executed, and everything else you wanted to know about the JT-NM Tested Program but were afraid to ask!

 
 

Thursday, October 17, 12:00 pm — 12:45 pm (EDM Stage)

Photo

Electronic Dance Music & DJ Stage: EDJ03 - Waves Product Workshop

Presenter:
Michael Pearson-Adams, Waves - Knoxville, TN, USA

 
 

Thursday, October 17, 12:00 pm — 12:30 pm (Booth 266 (Ex. Fl.))

Audio Builders Workshop Booth Talk: ABT07 - Healing Power of DIY Gear

Presenter:
Buddy Lee Dobberteen

 
 

Thursday, October 17, 12:00 pm — 12:45 pm (Live Production Stage)

Live Production Stage: LS08 - DPA Presents: The Hidden Mic for Broadway and Theater Productions

 
 

Thursday, October 17, 12:00 pm — 1:00 pm (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM11 - Chris Lord-Alge

Presenter:
Chris Lord-Alge, Mix LA - Los Angeles, CA, USA

 
 

Thursday, October 17, 12:00 pm — 12:30 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP16 - NMOS - A General Overview of the Current State

Presenter:
Rick Seegull, Riedel Communications - Burbank, CA, USA

Overview of the current state of NMOS (Automated Discovery and Connection Management of ST 2110 Media Devices), including a look at some of the security aspects and a short discussion of how Riedel is applying NMOS functionality in its product line.

 
 

Thursday, October 17, 12:15 pm — 1:30 pm (Demo rooms)

Product Development: PD08 - Vendor Event 2: Analog Devices

Demo Room 2D03

Analog Devices’ A2B digital audio bus is a new way to distribute bi-directional multi-channel audio. Previously only available to tier 1 automotive customers, A2B is now being released for the broad market for pro audio applications such as conference rooms, studio monitoring, rack-to-rack and board-to-board digital communication, and more. This session will feature a live, interactive demonstration of the A2B topology discussed in PD07 and will focus on a real-time configuration of multiple microphones and speakers utilizing the SigmaStudio development/deployment tools.

 
 

Thursday, October 17, 12:30 pm — 1:00 pm (Booth 266 (Ex. Fl.))

Audio Builders Workshop Booth Talk: ABT08 - Troubleshooting by Soldering and Building

Presenter:
Joyce Lieberman

 
 

Thursday, October 17, 12:30 pm — 1:00 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP17 - NMOS Convergence

Presenter:
Jeff Berryman, OCA Alliance

NMOS Convergence is a multi-organization collaboration for compatibly evolving the current NMOS specification set into a suite of formal public networking standards suitable for long-term strategic use. The initial version release is planned for the April 2021 timeframe. The project is based on the pioneering set of NMOS specifications created by the AMWA NMOS team.

 
 

Thursday, October 17, 1:00 pm — 1:45 pm (EDM Stage)

Photo

Electronic Dance Music & DJ Stage: Studio DMI Presents Luca Pretolesi Masterclass

Presenter:
Luca Pretolesi, Studio DMI - Las Vegas, NV, USA

 
 

Thursday, October 17, 1:00 pm — 1:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS08 - API Presents The Case for Analog

Presenters:
Joe Chiccarelli, Producer, mixer, engineer - Boston, MA, USA
Daniel Schlett, Strange Weather Studio - Brooklyn, NY, USA
Dave Trumfio, Gold-Diggers Sound - Los Angeles, CA, USA

Join API for a spirited panel discussion featuring Grammy Award-winning engineer/producers Daniel Schlett (Strange Weather studio, Brooklyn, NY), Joe Chiccarelli, and Dave Trumfio (Gold-Diggers Sound, Los Angeles, CA).

 
 

Thursday, October 17, 1:00 pm — 2:00 pm (Mix with the Masters Workshop Stage)


AES Mix with the Masters Workshop: MM12 - Tom Lord-Alge

Presenter:
Tom Lord-Alge, SPANK Studios - South Beach, FL, USA

 
 

Thursday, October 17, 1:30 pm — 2:00 pm (Booth 266 (Ex. Fl.))


Audio Builders Workshop Booth Talk: ABT09 - Learn to Solder

Presenter:
Bob Katz, Digital Domain Mastering - Orlando, FL, USA

 
 

Thursday, October 17, 1:30 pm — 2:30 pm (South Concourse A)

Student / Career: SC10 - MATLAB Plugin AES Student Competition - Open Demos

The shortlisted finalists participating in the 2019 edition of the MATLAB Plugin AES Student Competition will provide live interactive demonstrations and answer questions on their MATLAB-based VST plugins.
To learn more about the competition visit aes.org/students/awards/mpsc/. This session follows up on and complements the presentation-style showcase event taking place earlier in the day.

The following submissions were chosen by the judges to be presented in front of the audience. These projects are in contention for cash and software prizes. Meritorious awards are determined here and will be presented at the closing Student Delegate Assembly Meeting (SDA-2). Learn more about this competition at aes.org/students/awards.

• Edward Ly, "Inner Space," University of Aizu
• Domenico Andrea Giliberti, Festim Iseini, Nicola Ignazio Pelagalli, Alessandro Terenzi, "BEStEx - Bass Enhancer Stereo Expander," Universita Politecnica delle Marche
• Christian Steinmetz, "flowEQ," Universitat Pompeu Fabra
• Sean Newell, "Shift Drive," Belmont University
• John Kolar, "Dynamizer: Amplitude Modulator & Audio Effects Plugin," West Virginia University
• Michael Nuzzo, "Spectrum Pixelator," University of Massachusetts-Lowell
Visit the booths at this time to try the proposed audio effects first-hand and to gain additional insight into the technology behind these student projects.

 
 

Thursday, October 17, 1:45 pm — 2:45 pm (1E15+16)


Special Event: SE06 - Lunchtime Keynote: Steve Jordan

Presenter:
Steve Jordan, Steve Jordan Recording - New York, NY, USA

The title of Steve Jordan's lunchtime keynote is "The Love of Recording."

 
 

Thursday, October 17, 2:00 pm — 3:00 pm (EDM Stage)

Electronic Dance Music & DJ Stage: EDJ04 - Allen & Heath REELOP, XONE & HERCULES Presents:

Presenter:
Jamie Thompson

Bridging the Gap between Home Production and the Stage

Come learn what you will need to take your home music production to the stage and perform using tools like DJ mixers and MIDI controllers, as well as software programs like Ableton Live and Traktor Pro. There will be a live demonstration using all of these tools, along with a Q&A segment to answer any questions you may have.

 
 

Thursday, October 17, 2:00 pm — 2:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS09 - Genelec Presents: Navigating the Circles of Confusion: Monitoring in the Digital Age

Presenters:
Will Eggleston, Technical Marketing USA Genelec
Aki Mäkivirta, Genelec Oy - Iisalmi, Finland

Join Genelec R&D Director Aki Mäkivirta and Will Eggleston, Technical Marketing USA, as they discuss the obstacles facing modern production and monitoring environments, from stereo to immersive.

 
 

Thursday, October 17, 2:00 pm — 2:45 pm (Live Production Stage)

Live Production Stage: LS09 - Theatrical Sound Designers and Composers Association Panel

Presenters:
Ien DeNio
Sam Kusnetz
Beth Lake
Emma Wilk

A presentation from AES and TSDCA on different working situations for theatre sound designers, from Broadway to off-Broadway to regional to storefront/low budget. The panel consists of Ien DeNio, Sam Kusnetz, Beth Lake, and Emma Wilk, all of whom are working designers and associates in and around New York City. The panelists are also members of TSDCA - the Theatrical Sound Designers and Composers Association - a national organization working to further the work and concerns of Sound Designers and Composers within the theatrical community.

 
 

Thursday, October 17, 2:00 pm — 2:30 pm (Booth 266 (Ex. Fl.))


Audio Builders Workshop Booth Talk: ABT10 - Basic SigmaStudio and the Analog Devices ADAU1701 SigmaDSP

Presenter:
David Thibodeau, Analog Devices - Wilmington, MA, USA

 
 

Thursday, October 17, 2:00 pm — 3:00 pm (Mix with the Masters Workshop Stage)


AES Mix with the Masters Workshop: MM13 - Tchad Blake

Presenter:
Tchad Blake, Real World Studios - UK

 
 

Thursday, October 17, 2:00 pm — 2:30 pm (IP Pavilion Theater)


AoIP Pavilion: AIP18 - Bolero Wireless Intercom System in AES67 Networks

Presenter:
Rick Seegull, Riedel Communications - Burbank, CA, USA

Riedel Communications will provide an overview of their Bolero wireless intercom system, including the newest AES67 mode that facilitates the distribution of antennas over AES67 networks. Then, after a review of how to assemble and configure a Bolero standalone system, two people will be randomly selected from the audience to compete to see who can configure a system in the shortest amount of time.

 
 

Thursday, October 17, 2:30 pm — 3:00 pm (IP Pavilion Theater)


AoIP Pavilion: AIP19 - Designing with Dante and AES67/SMPTE ST 2110

Presenter:
Patrick Killianey, Audinate - Buena Park, CA, USA

In September 2019, Audinate added support for SMPTE ST 2110 in Dante devices. In this presentation, Audinate will share its vision on how Dante and AES67/SMPTE ST 2110 can be used together in a modern studio, including Dante Domain Manager. A tour of Dante’s managed and unmanaged integration with open standards will lead into a discussion of the need for network security, Layer 3 traversal, and real-time fault monitoring. All of this will be laid out with practical diagrams of broadcast facilities, from location sound to small studios and large properties.

 
 

Thursday, October 17, 3:00 pm — 4:00 pm (1E15+16)

Special Event: SE07 - Triple Threat: The Art, Production & Technology of Making Music

Moderator:
Paul Verna, Paul Verna Media - New York, NY, USA
Presenters:
Danny Kortchmar, Legendary GRAMMY nominated guitarist, songwriter and producer (Jackson Browne, Don Henley, James Taylor) - New York, NY, USA
Steve Jordan, Steve Jordan Recording - New York, NY, USA

With a background of dozens of platinum record credits as a musician, artist, songwriter, producer, and engineer, Danny Kortchmar sits down to discuss all aspects of making music, both creative and technical. From working in the world’s top commercial studios, to building his own home studio, to performing on stage with some of the world’s top artists, Kortchmar shares his secrets and experiences, digital and analog, then and now.

 
 

Thursday, October 17, 3:00 pm — 3:45 pm (Live Production Stage)

Live Production Stage: LS10 - Shure Presents: Day in the Life of an RF Coordinator

 
 

Thursday, October 17, 3:00 pm — 3:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS10 - The Technology behind Grammy-Winning Records - Michael Brauer and Igor Levin on the Art and Science of Pro Audio

Presenters:
Michael Brauer, Michael Brauer - New York, NY, USA
Igor Levin, Antelope Audio - New York, NY, USA

Two industry greats, Grammy Award-winning producer Michael Brauer and Antelope Audio founder Igor Levin, take the stage to share the secrets of their mutual success. Michael will expose his unique approach to the art of making hit records, revealing the role of legendary Antelope Audio equipment like the Atomic Reference Clock in his achievements. Igor will reflect on innovation and ingenuity as the driving forces behind his products, which showcase original developments like 6-transistor preamps, custom controllers, DSP + FPGA FX hardware, and Acoustically Focused Clocking. In a dialog where art and science intertwine, expect to gain formidable insight and know-how from two consummate professionals whose work continually moves the music industry forward.

 
 

Thursday, October 17, 3:00 pm — 3:30 pm (Booth 266 (Ex. Fl.))


Audio Builders Workshop Booth Talk: ABT11 - Reduce, Reuse, Recycle: An Approach to Repurposing Broken Gear in DIY Builds

Presenter:
Jason Bitner, Traffic Entertainment Group - Somerville, MA, USA

 
 

Thursday, October 17, 3:00 pm — 4:30 pm (South Concourse A)

Poster: P12 - Posters: Room Acoustics

P12-1 Transparent Office Screens Based on Microperforated Foil
Krzysztof Brawata, Gorycki&Sznyterman Sp. z o.o. - Cracow, Poland; Katarzyna Baruch, Gorycki&Sznyterman Sp. z o.o. - Cracow, Poland; Tadeusz Kamisinski, AGH University of Science and Technology - Cracow, Poland; Bartlomiej Chojnacki, AGH University of Science and Technology - Cracow, Poland; Mega-Acoustic - Kepno, Poland
In recent years, providing comfortable working conditions in open office spaces has become a growing challenge. The ever-increasing demand for office work implies the emergence of ever new spaces and the need to use available space, which generates the need for proper interior design. There are many acoustic solutions available on the market that support acoustic comfort in office spaces by ensuring appropriate levels of privacy and low levels of acoustic background. One such solution is the desktop screen, which divides employees' space. These solutions are based mainly on sound-absorbing materials, e.g., mineral wool and felt, as well as sound-insulating ones, such as glass or MDF. The article presents methods of using microperforated foils for building acoustic screens. The influence of the dimensions and parameters of the microperforated foil was examined. The method of its assembly, as well as the use of layered systems made of microperforated foil and sound-insulating material, were also considered in this paper.
Convention Paper 10282 (Purchase now)

P12-2 A Novel Spatial Impulse Response Capture Technique for Realistic Artificial Reverberation in the 22.2 Multichannel Audio Format
Jack Kelly, McGill University - Montreal, QC, Canada; Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Wieslaw Woszczyk, McGill University - Montreal, QC, Canada
As immersive media content and technology begin to enter the marketplace, the need for truly immersive spatial reverberation tools takes on a renewed significance. A novel spatial impulse response capture technique optimized for the 22.2 multichannel audio format is presented. The proposed technique seeks to offer a path for engineers who are interested in creating three-dimensional spatial reverberation through convolution. Its design is informed by three-dimensional microphone techniques for the channel-based capture of acoustic music. A technical description of the measurement system used is given. The processes by which the spatial impulse responses are captured and rendered, including deconvolution and loudness normalization, are described. Three venues that have been measured using the proposed technique are presented. Preliminary listening sessions suggest that the array is capable of delivering a convincing three-dimensional reproduction of several acoustic spaces with a high degree of fidelity. Future research into the perception of realism in spatial reverberation for immersive music production is discussed.
Convention Paper 10283 (Purchase now)

P12-3 Impulse Response Simulation of a Small Room and in situ Measurements Validation
Daniel Núñez-Solano, University of Las Américas - Quito, Ecuador; Virginia Puyana-Romero, University of Las Américas - Quito, Ecuador; Cristian Ordóñez-Andrade, University of Las Américas - Quito, Ecuador; Luis Bravo-Moncayo, Universidad de Las Américas - Quito, Ecuador; Christiam Garzón-Pico, Universidad de Las Américas - Quito, Ecuador
The study of reverberation time in room acoustics presents certain drawbacks when dealing with small spaces. In order to reduce the inaccuracies due to the lack of space for placing measurement devices, finite element methods become a good alternative to support measurement results or to predict the reverberation time on the basis of calculated impulse responses. This paper presents a comparison of the reverberation time obtained by means of in situ and simulated impulse responses. The impulse response is simulated using time-domain finite element methods. The room used for measurements and simulations is a control room at Universidad de Las Américas. Results show a mean absolute error of 0.04 s between the measured and computed reverberation times.
Convention Paper 10284 (Purchase now)
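As background to the comparison above: a common way to extract RT60 from an impulse response (whether measured in situ or simulated) is Schroeder backward integration followed by a line fit on the energy decay curve. The sketch below is a minimal, single-band illustration of that standard method, not the paper's FEM workflow; the function name and the T20 fit range (-5 dB to -25 dB) are choices of this example.

```python
import math

def rt60_from_ir(ir, fs):
    """Estimate RT60 from an impulse response via Schroeder backward
    integration, using a T20 fit (-5 dB to -25 dB on the energy decay
    curve) extrapolated to 60 dB of decay."""
    # Energy decay curve: backward cumulative sum of squared samples.
    energy = [x * x for x in ir]
    total = sum(energy)
    edc, running = [], total
    for e in energy:
        edc.append(running)
        running -= e
    # Decay curve in dB relative to the initial energy.
    edc_db = [10.0 * math.log10(max(v, 1e-30) / total) for v in edc]
    # Collect (time, level) points inside the -5 to -25 dB fit range.
    pts = [(i / fs, db) for i, db in enumerate(edc_db) if -25.0 <= db <= -5.0]
    # Least-squares line fit: db = slope * t + intercept.
    n = len(pts)
    st = sum(t for t, _ in pts)
    sd = sum(d for _, d in pts)
    stt = sum(t * t for t, _ in pts)
    std = sum(t * d for t, d in pts)
    slope = (n * std - st * sd) / (n * stt - st * st)
    return -60.0 / slope  # time for 60 dB of decay, in seconds
```

For a synthetic exponential decay the estimate recovers the designed decay time closely; real single-channel measurements would normally be band-filtered first.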

P12-4 Calculation of Directivity Patterns from Spherical Microphone Array Recordings
Carlotta Anemüller, International Audio Laboratories Erlangen - Erlangen, Germany; Jürgen Herre, International Audio Laboratories Erlangen - Erlangen, Germany; Fraunhofer IIS - Erlangen, Germany
Taking into account the direction-dependent radiation of natural sound sources (such as musical instruments) can help to enhance auralization processing and thus improves the plausibility of simulated acoustical environments as, e.g., found in virtual reality (VR) systems. In order to quantify this direction-dependent behavior, usually so-called directivity patterns are used. This paper investigates two different methods that can be used to calculate directivity patterns from spherical microphone array recordings. A comparison between both calculation methods is performed based on the resulting directivity patterns. Furthermore, the directivity patterns of several musical instruments are analyzed and important measures are extracted. For all calculations, the publicly available anechoic microphone array measurements database recorded at the Technical University Berlin (TU Berlin) was used.
Convention Paper 10285 (Purchase now)

 
 

Thursday, October 17, 3:00 pm — 4:00 pm (Mix with the Masters Workshop Stage)


AES Mix with the Masters Workshop: MM14 - Jack Joseph Puig

Presenter:
Jack Joseph Puig, Record Executive/Producer/Mixer - Hollywood, CA, USA

 
 

Thursday, October 17, 3:00 pm — 3:30 pm (IP Pavilion Theater)

AoIP Pavilion: AIP20 - Deploying SMPTE ST 2110 in a Distributed Campus System

Presenters:
Cassidy Lee Phillips, Imagine Communications - Plano, TX, USA
Tony Pearson, North Carolina State University

When NC State University (NCSU) was looking to upgrade the Pro-AV gear driving its Distance Education and Learning Technology Applications (DELTA) program, they decided to future-proof their infrastructure with a standards-based approach to IP, while continuing to leverage existing SDI gear. Using SMPTE ST 2110, NCSU significantly multiplied the capacity of their fiber infrastructure — where four single-mode fiber strands had supported four HD video channels, now just two fiber strands deliver 32 bidirectional HD channels.

This presentation will share the real-world experiences and lessons learned from adopting AES67 and ST 2110 to successfully deploy a first-of-its-kind inter-campus Pro-AV system.

 
 

Thursday, October 17, 3:30 pm — 4:00 pm (IP Pavilion Theater)


AoIP Pavilion: AIP21 - Technical: Synchronization & Alignment (ST 2110 / AES67)

Presenter:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany

A deep dive into the secrets behind timing and synchronization of AES67 and ST 2110 streams. The meaning of PTP, media clocks, RTP, the synchronization parameters carried in SDP, and the magic of stream alignment will be unveiled in this compact presentation.
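As a concrete illustration of how media clocks tie RTP to PTP in AES67: the media clock counts samples since the PTP epoch, and the 32-bit RTP timestamp is that count plus the offset signaled in the SDP `mediaclk:direct` attribute, truncated to 32 bits. The sketch below shows only that arithmetic (the function name is invented for illustration):

```python
def rtp_timestamp(ptp_time_ns, sample_rate, mediaclk_offset=0):
    """RTP timestamp for a PTP-epoch time under the AES67 media-clock
    rule: samples elapsed since the PTP epoch, plus the SDP
    'mediaclk:direct' offset, modulo 2**32."""
    samples_since_epoch = ptp_time_ns * sample_rate // 1_000_000_000
    return (samples_since_epoch + mediaclk_offset) % (1 << 32)
```

Because every compliant device derives the same count from the shared PTP time, receivers can align streams from different senders sample-accurately; the 32-bit wrap is why the absolute epoch matters.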

 
 

Thursday, October 17, 4:00 pm — 4:45 pm (EDM Stage)

Electronic Dance Music & DJ Stage: EDJ05 - iZotope Presents: Mastering EDM

Presenter:
Alex Psaroudakis, Alex Psaroudakis Mastering - Brooklyn, NY, USA

• The dilemma between detrimental loudness and efficient masters for loud PA playback in clubs and festivals
• Gain staging to maintain punch and integrity for electronic music mastering
• Choosing limiters for electronic music
• The importance of understanding the aesthetics of the different electronic music genres and which process will serve each best

 
 

Thursday, October 17, 4:00 pm — 4:45 pm (Live Production Stage)

Live Production Stage: LS11 - Object Based Audio for Musicals – Case Study: Broadway Bounty Hunter

Presenters:
Cody Spencer
Jesse Stevens, L-Acoustics

Sponsored by L-Acoustics

Sound Designer Cody Spencer joins the panel for a case study of his work on the recent Joe Iconis musical “Broadway Bounty Hunter.” Spencer chose an object-based approach to the sound design, which allowed for increased clarity, control, and new creative possibilities.

 
 

Thursday, October 17, 4:00 pm — 4:45 pm (Recording Stage)


Project Studio Expo Recording Stage: RS11 - When Loud Is Not Loud: What You Need to Know About Loudness Measurement Today

Moderator:
Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA; Arizona State University - Phoenix, AZ, USA

With streaming dominating the music listening landscape, it is time to revisit both what loudness actually is and how to manage it. Companies such as Apple, YouTube, Spotify and others each have their own measurement standards and loudness targets. Whether it is music or spoken word (such as podcasts), care is needed to preserve the artistic intent of the content’s creators. It is critical that producers, recording, mixing and mastering engineers understand what truly is at stake, and how to read, measure and manage the loudness of audio files. Join representatives from four highly regarded audio tech companies who will inform and enlighten about the proper use of today’s loudness meters and measurement tools.
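For orientation, the loudness targets used by streaming services are stated in LUFS per ITU-R BS.1770, which is built on mean-square (energy) measurement over time rather than peak level. The sketch below is a deliberately simplified illustration of that idea: it omits the K-weighting filter and the two-stage gating that a compliant meter requires, so treat it as a teaching aid, not a measurement tool.

```python
import math

def integrated_loudness_sketch(samples, fs):
    """Very simplified integrated-loudness sketch in the spirit of
    ITU-R BS.1770: mean-square power over 400 ms blocks, expressed
    on a LUFS-like scale. K-weighting and gating are omitted, so the
    result is illustrative only."""
    block = int(0.4 * fs)  # 400 ms measurement blocks
    powers = []
    for start in range(0, len(samples) - block + 1, block):
        seg = samples[start:start + block]
        powers.append(sum(x * x for x in seg) / block)
    mean_power = sum(powers) / len(powers)
    return -0.691 + 10.0 * math.log10(mean_power)
```

A full-scale sine lands near -3.7 on this scale, which is why energy-based loudness numbers sit well below 0 dBFS even for "maximally loud" signals.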

 
 

Thursday, October 17, 4:00 pm — 5:00 pm (Mix with the Masters Workshop Stage)

AES Mix with the Masters Workshop: MM15 - Leslie Brathwaite

Presenter:
Leslie Brathwaite

 
 

Thursday, October 17, 4:00 pm — 4:30 pm (IP Pavilion Theater)

AoIP Pavilion: AIP22 - Audio Meta Data Transport

Presenter:
Kent Terry, Dolby Laboratories Inc. - San Francisco, CA, USA

Audio metadata, particularly dynamic, time-varying audio metadata, is now a requirement for many audio applications. Because AoIP systems use IP connections, many possibilities for metadata transport exist; however, standards are lacking in many areas, which can lead to competing solutions and market fragmentation. This session will discuss current and emerging IP-based audio metadata transport standards that support existing AoIP standards and allow interoperability between AoIP systems and devices. Standards relevant to the ST 2110 suite and AES70 will be covered.

 
 

Thursday, October 17, 4:30 pm — 5:30 pm (1E15+16)

Special Event: AR01 - Long Term Preservation of Audio Assets

Moderator:
Jessica Thompson, Jessica Thompson Audio - Berkeley, CA, USA
Panelists:
Jeff Balding, NARAS P&E Wing
Rob Friedrich, Library of Congress
Jamie Howarth, Plangent Processes - Nantucket, MA, USA
Bob Koszela, Iron Mountain Entertainment Services - Boyers, PA, USA
Pat Kraus, UMG
Cheryl Pawelski, Omnivore Records
Toby Seay, Drexel University - Philadelphia, PA, USA

Throughout the history of the recorded music industry, masters have burned, been lost in floods, been mislabeled and misfiled, neglected, forgotten, even systematically destroyed to salvage the raw materials. This panel is an opportunity to learn from the past and move the conversation forward, addressing current challenges with long term preservation of audio assets. Beyond rehashing well-established best practices, panelists will discuss barriers to preservation including technical hurdles, cost, long term storage, deteriorating media, maintaining legacy playback equipment, legalities, and the very simple fact that we cannot and will not save everything.

 
 

Thursday, October 17, 4:30 pm — 5:30 pm (1E10)

Special Event: SE08 - DTVAG Forum: Audio for a New Television Landscape

Presenters:
Roger Charlesworth, DTV Audio Group - New York, NY, USA
Tim Carroll, Dolby Laboratories - San Francisco, CA, USA
Scott Kramer, Netflix - Los Angeles, CA, USA
Sean Richardson, Starz Entertainment - Denver, CO, USA
Tom Sahara, Turner Sports Vice President, Operations and Technology, Turner Sports - Atlanta, GA, USA
Jim Starzynski, NBCUniversal - New York, NY, USA; ATSC Group - Washington D.C.

We appear to have now entered the post-television, television era. In a few short years, the entire nature of television distribution and consumption has changed so significantly as to be unrecognizable. Ubiquitous and inexpensive wireless and broadband networking; smart TVs and mobile devices; and massively-scalable cloud computing have created a completely new entertainment distribution system, upending the traditional broadcast model, and changing viewing habits forever. The transition from “hardwired” to “virtualized” distribution has expanded the possibilities for television audio innovation, further raising the bar on ultimate quality of premium viewing experiences, while presenting creative challenges in translating these experiences to an ever-widening range of devices.

The advent of affordable consumer 4K and HDR on smart TVs and other devices has radically transformed the at-home viewing experience. Combined with the story-telling power of premium episodic content and streaming movies, upscale home viewing has supplanted cinema as the ultimate Hollywood entertainment consumption experience. Audio has been front and center in this transition as more and more premium content becomes available in Dolby Atmos immersive surround.

The dramatic resurgence of surround sound, and emerging interest in next-generation enhanced-surround, is built on the ability to virtualize surround presentations over a growing range of devices and environments including increasingly sophisticated immersive-audio-capable soundbars and TV sets, alongside enhanced surround virtualizing headphones, earbuds and mobile devices.

Please join us for a discussion of how the post-television era is re-inventing television sound.

The DTV Audio Group Forum at AES is produced in association with the Sports Video Group and is sponsored by: Brainstorm, Calrec, Dale Pro Audio, Dolby, Lawo, Sanken, Shure

“The rule book for television distribution is being completely re-written. The migration away from traditional broadcasting to IP delivery continues to accelerate the uptake of advanced encoding solutions and sophisticated audio services. This transition creates new challenges in providing quality and consistency across an ever-widening range of devices and environments. Please join the DTVAG for a discussion of these and other important television audio issues.”

~ Roger Charlesworth, Executive Director, DTV Audio Group

 
 

Thursday, October 17, 4:30 pm — 5:00 pm (IP Pavilion Theater)


AoIP Pavilion: AIP23 - Innovation through Open Technologies

Presenter:
Nestor Amaya, Ross Video / COVELOZ Technologies - Ottawa, ON, Canada

Where would today's Cloud, smartphones and embedded IoT engines be without Linux? And could we have gotten here with a proprietary OS such as Windows? Similarly, this presentation argues that Pro Audio/Video applications need open technologies to serve the unique needs of their users. Our markets need a fully open stack, beyond just AES67/ST 2110 transport, to reap the benefits of our investment in A/V networking technologies. Learn how NMOS, EmBER+ and similar open technologies enable you to innovate and serve your users better when compared to proprietary control systems.

 
 

Thursday, October 17, 5:00 pm — 5:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS12 - Beyond Basics in Vocal Recording & Production: Engineering with Focus on the Creative Process

Presenters:
Neal Cappellino, Multiple Grammy-winning Engineer
Mike Picotte, Producer Engineer Sweetwater Sound - Fort Wayne, IN, USA
Jonathan Pines, Rupert Neve Designs - Wimberley, TX, USA; sE Electronics

Master the basics in order to facilitate the creative process that’s happening on the other side of the glass. Grammy Award-winning engineer Neal Cappellino and producer-engineers Mike Picotte and Jonathan Pines will detail five aspects of vocal recording and production they see as foundational skills engineers must be fluent in to be effective creative collaborators.

iTeam = Interpersonal, Technical, Environmental, Administrative, and Musical; skills that combine to foster a supportive and successful collaboration.

Sponsored by: Sweetwater Sound, sE Electronics, and Rupert Neve Designs

 
 

Thursday, October 17, 5:00 pm — 5:45 pm (Live Production Stage)

Live Production Stage: LS12 - DiGiCo Presents: Broadway to Black Box Theatre Mixing

Moderator:
Matt Larson, DiGiCo / Group One Limited National Sales Manager - Farmingdale, NY
Panelists:
Lew Mead, Autograph A2D Managing Director - NY, USA
Dan Page

Benefits of DiGiCo Theater software and workflow

Brief overview and backstory that illustrates how the pioneering theatre software was conceived and developed by Autograph and DiGiCo.
We will demonstrate the T-Software and how it is implemented in musical theatre productions. This session will cover an in-depth introduction to all theatre-specific features and how they can be used in the demanding world of theatre sound design, discuss controlling external systems from the DiGiCo platform via protocols such as OSC, GPIO, and MIDI, and consider what the future may bring in sound design.

Attendees will leave understanding the additional tools the T-Software provides for fast, efficient programming of shows ranging from simple to complex.

 
 

Thursday, October 17, 5:00 pm — 6:00 pm (Mix with the Masters Workshop Stage)


AES Mix with the Masters Workshop: MM16 - Andy Wallace

Presenter:
Andy Wallace, Recording engineer - USA

 
 

Thursday, October 17, 6:30 pm — 8:00 pm (1E15+16)


Special Event: SE09 - Heyser Lecture

Presenter:
Louis Fielder, Retired - Millbrae, CA, USA

The Richard C. Heyser distinguished lecturer for the 147th AES Convention is Louis D. Fielder.

Psychoacoustics Applied to Dynamic-Range and Nonlinear-Distortion Assessment
The psychoacoustics of noise detection, measurements of noise in the digital-audio recording storage reproduction chain, and measurements of peak-acoustic pressures in music performances are combined to determine the requirements for noise-free reproduction of music. It is found that the required ratio between the maximum reproduction levels and the perceived audibility of noise can be as much as 124 decibels. When more practical circumstances are considered, this requirement is shown to drop to more feasible values. Next, the concept of auditory masking is introduced to allow for the assessment of nonlinear distortions in digital-audio conversion systems operating at low signal levels and then several examples of digital-audio conversion systems are examined. Finally, an expanded use of masking and a model to calculate a total nonlinear-distortion audibility number are used to determine the audibility of nonlinear distortions in headphones when driven by sine-wave signals at or below 500 Hz. Examples of headphone distortion assessment are examined and extension of this measurement technique to low-frequency loudspeaker evaluation is also discussed.

 
 

Friday, October 18, 9:00 am — 10:30 am (South Concourse A)

Engineering Brief: EB2 - Posters: Applications in Audio

EB2-1 Withdrawn


EB2-2 A Latency Measurement Method for Networked Music Performances
Robert Hupke, Leibniz Universität Hannover - Hannover, Germany; Sripathi Sridhar, New York University - New York, NY, USA; Andrea Genovese, New York University - New York, NY, USA; Marcel Nophut, Leibniz Universität Hannover - Hannover, Germany; Stephan Preihs, Leibniz Universität Hannover - Hannover, Germany; Tom Beyer, New York University - New York, NY, USA; Agnieszka Roginska, New York University - New York, NY, USA; Jürgen Peissig, Leibniz Universität Hannover - Hannover, Germany
New York University and Leibniz Universität Hannover are working on future immersive Networked Music Performances. One of the biggest challenges of audio data transmission over IP-based networks is latency, which can affect the interplay of the participants. In this contribution, two metronomes, utilizing the Global Positioning System to generate a globally synchronized click signal, were used as a tool to determine delay times in the data transmission between both universities with high precision. The aim of this first study is to validate the proposed method by obtaining insights into transmission latency as well as latency fluctuations and asymmetries. This work also serves as a baseline for future studies and helps to establish an effective connection between the two institutions.
Engineering Brief 529 (Download now)
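With a globally synchronized click available at both ends, the one-way delay can be read off as the lag that best aligns the received signal with the reference click. The brute-force cross-correlation sketch below illustrates that alignment step under those assumptions; the function name is illustrative, not taken from the brief.

```python
def estimate_delay(reference, received, fs):
    """Estimate transmission latency by brute-force cross-correlation:
    find the lag (in samples) that best aligns the received click track
    with the reference click, then convert to seconds."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(reference) + 1):
        # Inner product of the reference against the received signal
        # starting at this lag; maximal when the clicks line up.
        score = sum(r * x for r, x in zip(reference, received[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / fs
```

In practice an FFT-based correlation would be used for long recordings, but the principle, pick the lag of maximum correlation, is the same.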

EB2-3 An Investigation into the Effectiveness of Room Adaptation Systems: Listening Test Results
Pei Yu, Nanjing University - Nanjing, China; Ziyun Liu, Nanjing University - Nanjing, China; Shufeng Zhang, Nanjing University - Nanjing, China; Yong Shen, Nanjing University - Nanjing, Jiangsu Province, China
Loudspeaker-room interactions are well known for affecting the perceived sound quality of low frequencies. To address this problem, different room adaptation systems for adapting a loudspeaker to its acoustic environment have been developed. In this study two listening tests were performed to assess the effectiveness of four different room adaptation systems under different circumstances. The factors investigated include the listening room, loudspeaker, listening position, and listener. The results indicate that listeners’ preference for different adaptation systems is affected by the specific acoustic environment. The adaptation system based on acoustic power measurement was the most preferred and showed stable performance.
Engineering Brief 530 (Download now)

EB2-4 Evaluating Four Variants of Sine Sweep Techniques for Their Resilience to Noise in Room Acoustic Measurements
Eric Segerstrom, Rensselaer Polytechnic Institute - Troy, NY, USA; Ming-Lun Lee, University of Rochester - Rochester, NY, USA; Steve Philbert, University of Rochester - Rochester, NY, USA
The sine sweep is one of the most effective methods for measuring room impulse responses; however, ambient room noise or unpredictable impulsive noises can negatively affect the quality of the measurement. This study evaluates four variants of the sine sweep technique for their resilience to noise when used as an excitation signal in room impulse response measurements: linear, exponential, noise-whitened, and minimum-noise. The results show that in a pseudo-anechoic environment, exponential and linear sine sweeps are the most resilient to impulsive noise among the four, while none of the evaluated sweeps are resilient to impulsive noise in an acoustically untreated room. Additionally, minimum-noise sine sweeps are shown to be the most resilient to ambient noise.
Engineering Brief 531 (Download now)
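Of the variants compared above, the exponential (Farina) sweep is the most widely used; its instantaneous frequency rises exponentially from the start frequency to the end frequency over the sweep duration. A minimal generator sketch, assuming the standard Farina formulation:

```python
import math

def exponential_sweep(f1, f2, duration, fs):
    """Generate an exponential (Farina) sine sweep from f1 to f2 Hz.
    Phase is k * (exp(t / L) - 1), so the instantaneous frequency is
    f1 at t = 0 and f2 at t = duration."""
    n = int(duration * fs)
    L = duration / math.log(f2 / f1)        # exponential time constant
    k = 2 * math.pi * f1 * L                # phase scaling
    return [math.sin(k * (math.exp((i / fs) / L) - 1.0)) for i in range(n)]
```

Deconvolving the recorded response with this sweep's inverse filter separates the linear impulse response from harmonic-distortion artifacts, which is one reason the exponential variant is popular for room measurements.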

EB2-5 Perceptually Affecting Electrical Properties of Headphone Cable – Factor Hunting Approach
Akihiko Yoneya, Nagoya Institute of Technology - Nagoya, Aichi-pref., Japan
An approach to finding the cause of perceptual sound-quality changes attributed to headphone cables is proposed. Candidate factors are selected from measurement results, simulated by digital signal processing, and the simulated sounds are evaluated by listening to verify each candidate's validity. For the headphone cable studied, the key factor was found to be inductance that changes with the flowing current. The experimental results make clear that changes in transfer characteristics affect perceptual sound quality very sensitively.
Engineering Brief 532 (Download now)

EB2-6 An Investigation into the Location and Number of Microphone Measurements Necessary for Efficient Active Control of Low-Frequency Sound Fields in Listening Rooms
Tom Bell, Bowers & Wilkins - Southwater, West Sussex, UK; University of Southampton - Southampton, Hampshire, UK; Filippo Maria Fazi, University of Southampton - Southampton, Hampshire, UK
The purpose of this investigation is to understand the minimum number of control microphone measurements needed and their optimal placement to achieve effective active control of the low-frequency sound field over a listening area in a rectangular room. An analytical method was used to model the transfer functions the loudspeakers and a 3-dimensional array of 75 virtual microphones. A least-squares approach was used to create one filter per sound source from a varying number and arrangement of these measurements, with the goal to minimize the error between the reproduced sound field and the target. The investigation shows once enough measurements are taken there is a clear diminishing return in the effectiveness of the filters versus the number of measurements needed. [Presentation only; not in E-Library]
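As background, the least-squares filter design described above can be sketched for a single frequency bin; the random transfer functions and the Tikhonov regularization term below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mics, n_srcs = 75, 4   # 75 virtual control mics, a handful of LF sources

# H[m, s]: transfer function from source s to microphone m at one frequency bin
H = rng.standard_normal((n_mics, n_srcs)) + 1j * rng.standard_normal((n_mics, n_srcs))
d = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)  # target field

# Regularized least squares: minimize ||H w - d||^2 + beta * ||w||^2
beta = 1e-3
w = np.linalg.solve(H.conj().T @ H + beta * np.eye(n_srcs), H.conj().T @ d)

err = np.linalg.norm(H @ w - d) / np.linalg.norm(d)  # normalized reproduction error
```

Repeating such a solve with subsets of the microphones shows how the residual error changes with the number and placement of measurements, which is the diminishing-return effect the brief reports.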

EB2-7 Measuring Speech Intelligibility Using Head-Oriented Binaural Room Impulse Responses
Allison Lam, Tufts University - Medford, MA, USA; Ming-Lun Lee, University of Rochester - Rochester, NY, USA; Steve Philbert, University of Rochester - Rochester, NY, USA
Speech intelligibility/speech clarity is important in any setting in which information is verbally communicated. More specifically, a high level of speech intelligibility is crucial in classrooms to allow teachers to effectively communicate with their students. Given the importance of speech intelligibility in learning environments, several studies have analyzed how accurately the standard method of measuring clarity predicts the level of speech intelligibility in a room. In the context of speech measurements, C50 has been widely used to measure clarity. Instead of using a standard omnidirectional microphone to record room impulse responses for clarity measurements, this study examines the effectiveness of room impulse responses measured with a binaural dummy head. The data collected for this experiment show that C50 measurements differ between the left and right channels by varying amounts based on the dummy head’s position in the room and head orientation. To further investigate the effectiveness of binaural C50 measurements in comparison to the effectiveness of omnidirectional C50 measurements, this research explores the results of psychoacoustic testing to determine which recording method more consistently predicts human speech intelligibility. These results, combined with qualitative observations, predict how precisely acousticians are able to measure C50.
Engineering Brief 533 (Download now)
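For context, C50 is the ratio of early (first 50 ms after the direct sound) to late energy in an impulse response, expressed in dB. A minimal single-channel sketch, applied per ear for a binaural RIR, might look like this; the synthetic decaying-noise RIR is purely illustrative.

```python
import numpy as np

def c50(ir, fs):
    """Clarity index C50 (dB): early (< 50 ms) vs. late energy in the RIR."""
    onset = int(np.argmax(np.abs(ir)))   # crude direct-sound detection
    n50 = onset + int(0.050 * fs)
    early = float(np.sum(ir[onset:n50] ** 2))
    late = float(np.sum(ir[n50:] ** 2))
    return 10.0 * np.log10(early / late)

# synthetic exponentially decaying noise as a stand-in room impulse response
fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
ir = rng.standard_normal(fs) * np.exp(-t / 0.3)

clarity = c50(ir, fs)
```

For a binaural measurement, computing this separately on the left and right channels exposes the inter-channel differences the brief investigates.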

EB2-8 Compensation Filters for Excess Exciter Excursion on Flat-Panel Loudspeakers
David Anderson, University of Pittsburgh - Pittsburgh, PA, USA
Inertial exciters are used to actuate a surface into bending vibration, producing sound, but often have a high-Q resonance that can cause the exciter magnet to displace enough to contact the bending panel. The magnet contacting the panel can cause distortion and possibly even damage to the exciter or panel while having a minimal contribution to acoustic output. A method is outlined for deriving a digital biquad filter to cancel out the excessive displacement of the magnet based on measurements of the exciter’s resonant frequency and Q-factor. Measurements of exciter and panel displacement demonstrate that an applied filter reduces magnet excursion by 20 dB at the resonant frequency.
Engineering Brief 534 (Download now)
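One common way to build such a compensation biquad, sketched here as an assumption rather than the author's exact derivation, is to place the filter's zeros on the measured resonance (f0, Q) and its poles at a well-damped replacement, using RBJ-cookbook-style second-order coefficients; all parameter values are illustrative.

```python
import numpy as np

def excursion_compensator(f0, q_meas, fs, q_target=0.707):
    """Biquad whose zeros cancel the measured high-Q resonance and whose
    poles substitute a well-damped one; gain at f0 becomes q_target/q_meas."""
    w0 = 2.0 * np.pi * f0 / fs

    def second_order(q):
        alpha = np.sin(w0) / (2.0 * q)
        return np.array([1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha])

    b = second_order(q_meas)     # zeros: the sharp measured resonance
    a = second_order(q_target)   # poles: damped replacement
    return b / a[0], a / a[0]    # normalized biquad coefficients

b, a = excursion_compensator(f0=60.0, q_meas=20.0, fs=48000)
```

At f0 the magnitude response of this biquad is q_target/q_meas, so a measured Q ten times the target yields roughly 20 dB of attenuation at resonance.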

 
 

Friday, October 18, 9:00 am — 5:00 pm (SDA Booth)

Photo

Student / Career: SC12 - Resume Review (for Students, Recent Grads, and Young Professionals)

Moderator:
Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA; Arizona State University - Phoenix, AZ, USA

Students, recent graduates, and young professionals… Often your resume is an employer’s first impression of you. Naturally, you want to make a good one. Employers often use job search websites to search for candidates. Some use automated software to scan your resume and, in some cases, your LinkedIn/social media profiles as well. Questions may arise regarding formatting, length, and keywords and phrases so that your resume shows up in searches and lands on the desk of the hiring manager. No matter how refined your resume may be, it is always good to have someone else review your materials. Receive a one-on-one 20–25 minute review of your resume from a hiring manager in the audio engineering business. Plus, if time allows, your cover letter and online presence will be reviewed as well.

Sign up at the student (SDA) booth immediately upon arrival. For those who would like to have your resume reviewed on Wednesday, October 16, prior to SDA-1, please email the request to: aesresumereview@outlook.com. You may be requested to upload your resume prior to your appointment. Uploaded resumes will only be seen by the moderator and will be deleted at the conclusion of the 147th Pro Audio Convention.

Reviews take place throughout the convention by appointment only.

 
 

Friday, October 18, 9:30 am — 11:00 am (1E15+16)

Special Event: H06 - African Americans in Audio

Moderator:
Leslie Gaston-Bird, Mix Messiah Productions - Brighton, UK; Audio Engineering Society - London, UK
Panelists:
Prince Charles Alexander, Berklee College of Music - Boston, MA, USA
Abhita Austin, Audio Engineer-Producer and Founder of The Creator’s Suite
James Henry, recording engineer/producer and audio educator
Ebonie Smith, Atlantic Records/Hamilton Cast Album
Paul "Willie Green" Womack, Willie Green Music - Brooklyn, NY, USA
Bobby Wright, Hampton University

African Americans have contributed to the popular music recording industry in a number of ways, and although their achievements are visible, their representation at technical conferences and on the exhibit floor is less so. Join our panel of renowned engineers, performers, and educators for a discussion on how African Americans have been blazing trails behind the scenes in the recording industry and how we can best engage and welcome them to access the social and scholarly networks that have benefited us. Topics include a technological pedagogy for hip-hop education and dispelling the stereotype that African-American engineers are only able to work in “Black Music” genres.

The session is chaired by AES Governor-at-Large Leslie Gaston-Bird, the first African American to sit on the AES Board of Governors. Panelists include: James Henry, three-time Grammy-nominated recording engineer/producer and audio educator; Paul "Willie Green" Womack, producer/engineer and chair of the Hip-Hop and R&B track of the AES Convention Committee; Prince Charles Alexander, Grammy-winning music producer/engineer/recording artist and Professor of Music Production and Engineering at Berklee College of Music and Berklee Online; Bobby Wright, Assistant Professor (music, audio engineering) at Hampton University; Abhita Austin, audio engineer-producer and founder of The Creator’s Suite; and Ebonie Smith, Grammy-winning engineer, producer, and singer/songwriter.

 
 

Friday, October 18, 10:00 am — 4:00 pm

Exhibit: Exhibition

Leave plenty of time to walk the exhibits floor so you can learn about the latest products and technologies, scope out your competitors, or simply drool over the latest toys.

 
 

Friday, October 18, 10:00 am — 11:00 am (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM17 - Rafa Sardina

Presenter:
Rafa Sardina, Fishbone Productions, Inc. - Los Angeles, CA, USA; AfterHours Studios - Los Angeles, CA, USA

 
 

Friday, October 18, 11:00 am — 12:30 pm (South Concourse A)

Engineering Brief: EB3 - Posters: Spatial Audio

EB3-1 Comparing Externalization Between the Neumann KU100 Versus Low Cost DIY Binaural Dummy Head
Kelley DiPasquale, SUNY Potsdam, Crane School of Music - Potsdam, NY, USA
Music is usually recorded using traditional microphone techniques. With technology continually advancing, binaural recording, in which two microphones are used to create a three-dimensional stereo image, has become more popular. Commercially available binaural heads are prohibitively expensive and not practical for use in typical educational environments or for casual use in a home studio. This experiment consisted of gathering recorded stimuli with a homemade binaural head and the Neumann KU 100. The recordings were played back for 34 subjects instructed to rate the level of externalization of each example. The study investigates whether a homemade binaural head built for under $500 can externalize sound as well as a commercially available binaural head, the Neumann KU 100.
Engineering Brief 535 (Download now)

EB3-2 SALTE Pt. 1: A Virtual Reality Tool for Streamlined and Standardized Spatial Audio Listening Tests
Daniel Johnston, University of York - York, UK; Benjamin Tsui, University of York - York, UK; Gavin Kearney, University of York - York, UK
This paper presents SALTE (Spatial Audio Listening Test Environment), an open-source framework for creating spatial audio perceptual tests within virtual reality (VR). The framework incorporates standard test paradigms such as MUSHRA, 3GPP TS 26.259, and audio localization. The simplified drag-and-drop user interface facilitates rapid and robust construction of customized VR experimental environments within Unity3D without any prior knowledge of the game engine or the C# language. All audio is rendered by the dedicated SALTE audio renderer, which is controlled by dynamic participant data sent via Open Sound Control (OSC). Finally, the software can export all experimental conditions, such as visuals, participant interaction mechanisms, and test parameters, allowing for streamlined, standardized, and comparable data within and between organizations.
Engineering Brief 536 (Download now)
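The OSC messages that carry participant data have a simple binary layout: a null-terminated, 4-byte-padded address, a type-tag string, then big-endian arguments. A minimal encoder sketch follows; the /salte/head address is an invented example, not SALTE's actual namespace.

```python
import struct

def osc_pad(raw: bytes) -> bytes:
    """Pad to a multiple of 4 bytes, as OSC 1.0 requires."""
    return raw + b"\x00" * (-len(raw) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC 1.0 message (int32 and float32 arguments only)."""
    out = osc_pad(address.encode("ascii") + b"\x00")
    tags, payload = ",", b""
    for arg in args:
        if isinstance(arg, float):
            tags, payload = tags + "f", payload + struct.pack(">f", arg)
        elif isinstance(arg, int):
            tags, payload = tags + "i", payload + struct.pack(">i", arg)
        else:
            raise TypeError("only int/float shown in this sketch")
    return out + osc_pad(tags.encode("ascii") + b"\x00") + payload

# e.g. a participant ID and a head-tracker yaw value (hypothetical address)
pkt = osc_message("/salte/head", 3, 1.5)
```

Sent over UDP, such packets let a test GUI drive a separate audio renderer without any shared code.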

EB3-3 SALTE Pt. 2: On the Design of the SALTE Audio Rendering Engine for Spatial Audio Listening Tests in VR
Tomasz Rudzki, University of York - York, UK; Chris Earnshaw, University of York - York, UK; Damian Murphy, University of York - York, UK; Gavin Kearney, University of York - York, UK
The dedicated audio rendering engine for conducting listening experiments using the SALTE (Spatial Audio Listening Test Environment) open-source virtual reality framework is presented. The renderer can be used for controlled playback of Ambisonic scenes (up to 7th order) over headphones and loudspeakers. Binaural-based Ambisonic rendering facilitates the use of custom HRIRs contained within separate WAV files or SOFA files, as well as head tracking. All parameters of the audio rendering software can be controlled in real time by the SALTE graphical user interface. This allows for perceptual evaluation of Ambisonic scenes and different decoding schemes using custom HRTFs.
Engineering Brief 537 (Download now)
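As background to Ambisonic rendering, first-order encoding gains and a basic sampling (projection) decoder can be sketched as follows; ACN channel ordering and SN3D normalization are assumed here, and SALTE's actual decoders are more sophisticated.

```python
import numpy as np

def foa_encode(az_deg, el_deg=0.0):
    """First-order Ambisonic encoding gains, ACN order (W, Y, Z, X), SN3D."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([1.0,
                     np.sin(az) * np.cos(el),
                     np.sin(el),
                     np.cos(az) * np.cos(el)])

# sampling decoder for a square loudspeaker layout: one encoding row per speaker
speaker_az = [45.0, 135.0, -135.0, -45.0]
decoder = np.stack([foa_encode(az) for az in speaker_az]) / len(speaker_az)

# a source encoded at the front-left speaker should peak at that speaker
gains = decoder @ foa_encode(45.0)
```

Higher-order scenes and binaural output follow the same pattern, with more channels and HRIR convolution replacing the loudspeaker gains.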

EB3-4 Mixed Reality Collaborative Music
Andrea Genovese, New York University - New York, NY, USA; Marta Gospodarek, New York University - New York, NY, USA; Agnieszka Roginska, New York University - New York, NY, USA
This work illustrates a virtual collaborative experience between a real-time musician and virtual game characters based on pre-recorded performers. A dancer and percussionists were recorded with microphones and a motion capture system so that their data could be converted into game avatars able to be reproduced within VR/AR scenes. The live musician was also converted into a virtual character and rendered in VR, and the whole scene was observable by an audience wearing HMDs. The acoustic character of the live and pre-recorded audio was matched in order to blend the music into a cohesive mixed reality scene and address the viewer's expectations set by the real-world elements. [Presentation only; not available in E-Library]

EB3-5 Withdrawn


EB3-6 Field Report: Immersive Recording of a Wind Ensemble Using Height Channels and Delay Compensation for a Realistic Playback Experience
Hyunjoung Yang, McGill University - Montréal, QC, Canada; Alexander Dobson, McGill University - Montreal, QC, Canada; Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada
Practical examples of orchestral recording in stereo or surround are relatively easy to obtain, whereas documented recording practice in immersive audio remains relatively limited. This paper shares the experience of an immersive recording of a wind orchestra at McGill University: the concerns that had to be considered before planning the concert recording, the problems encountered during planning, and the solutions to those issues. The final result and the approach taken are then discussed.
Engineering Brief 538 (Download now)

 
 

Friday, October 18, 11:00 am — 11:45 am (Live Production Stage)

Live Production Stage: LS13 - RF Spectrum Update

Moderator:
Karl Winkler, Lectrosonics - Rio Rancho, NM, USA
Panelists:
Mark Albergo
Ben Escobedo, Shure Market Development
Michael Mason, President of CP Communications
David Missall

We're in it now! The results of the 600 MHz spectrum auction are upon us, as the new owners of the 616–698 MHz range are turning on their services. Join a panel of experts covering these changes, new FCC regulations, and the effects these changes are having on all UHF wireless microphone, intercom, IEM, and IFB users in the core TV bands.

 
 

Friday, October 18, 11:00 am — 11:45 am (Recording Stage)

Photo

Project Studio Expo Recording Stage: RS13 - Studio DMI Presents Luca Pretolesi Super Session

Presenter:
Luca Pretolesi, Studio DMI - Las Vegas, NV, USA

Immerse yourself in this interactive presentation designed to give attendees a beyond-the-box look at some of Pretolesi’s most impactful mixing and mastering tips. Through session recalls, live Q&A, and real-time track reviews, Pretolesi will present the same cutting-edge production techniques and hit-making insights that have helped to amplify the biggest artists in the world, including David Guetta, Jason Derulo, J Balvin, Diplo, Lil Jon, DJ Snake, Major Lazer, Steve Aoki, Above & Beyond, Nicki Minaj, Daddy Yankee, and Prince Royce.

 
 

Friday, October 18, 11:00 am — 12:00 pm (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM18 - Jimmy Douglass

Presenter:
Jimmy Douglass, Engineer, producer - USA

 
 

Friday, October 18, 11:00 am — 11:30 am (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP24 - Reinventing Intercom with SMPTE ST 2110-30

Presenter:
Martin Dyster, The Telos Alliance - Cleveland, OH USA

This presentation looks at the parallels between the emergence of audio over IP standards and the development of a product in the Intercom market sector that has taken full advantage of IP technology.

 
 

Friday, October 18, 11:15 am — 12:15 pm (1E15+16)

Special Event: SE10 - How We Make Music—Crossing the Decades from Analog to Digital

Moderator:
Chris Lord-Alge, Mix LA - Los Angeles, CA, USA
Panelists:
Danny Kortchmar, Legendary GRAMMY nominated guitarist, songwriter and producer (Jackson Browne, Don Henley, James Taylor) - New York, NY, USA
Presenter:
Eddie Kramer, Remark Music Ltd. - Woodland Hills, CA, USA
Panelists:
Tom Lord-Alge, SPANK Studios - South Beach, FL, USA
Dave Way

Hear from top recording engineers and producers about the techniques they use, from analog to digital, and how they connect the dots between the past, present, and future of creative technology, including how we transitioned into Pro Tools. This panel will feature Chris Lord-Alge, Tom Lord-Alge, and others to be announced.

 
 

Friday, October 18, 11:30 am — 1:30 pm (South Concourse B)

Student / Career: SC15 - Education and Career Fair

The combined AES 147th Education and Career Fair will match job seekers with companies and prospective students with schools.
Companies:
Looking for the best and brightest minds in the audio world? No place will have more of them assembled than the 147th Convention of the Audio Engineering Society. Companies are invited to participate in our Education and Career Fair free of charge. This is the perfect chance to identify your ideal new hires! All attendees of the convention, students and professionals alike, are welcome to come visit with representatives from participating companies to find out more about job and internship opportunities in the audio industry. Bring your resume!
Schools:
One of the best reasons to attend AES conventions is the opportunity to make important connections with your fellow educators from around the globe. Academic Institutions offering studies in audio (from short courses to graduate degrees) will be represented in a “table top” session. Information on each school’s respective programs will be made available through displays and academic guidance. There is no charge for schools/institutions to participate. Admission is free and open to all convention attendees.

 
 

Friday, October 18, 11:30 am — 12:00 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP26 - JT-NM Tested Program - Test Plans and Results

Presenter:
Ievgen Kostiukevych, European Broadcasting Union - Le Grand-Saconnex, Genève, Switzerland

The JT-NM Tested Program was repeated in August 2019 with the addition of AMWA NMOS/JT-NM TR-1001-1 testing, and new revisions of the test plans were produced. What does all this mean for end customers? The editor and coordinator of the program will explain the reasoning behind it, the technical details, what was changed in the new revisions, how it all was executed, and everything else you wanted to know about the JT-NM Tested Program but were too afraid to ask!

 
 

Friday, October 18, 12:00 pm — 12:30 pm (Booth 266 (Ex. Fl.))

Photo

Audio Builders Workshop Booth Talk: ABT12 - Get to Know Your Gear with Free Analysis Software

Presenter:
Peterson Goodwyn, DIY Recording Equipment - Philadephia, PA, USA

 
 

Friday, October 18, 12:00 pm — 12:45 pm (Recording Stage)

Photo

Project Studio Expo Recording Stage: RS14 - Focusrite Presents: Podcasting: The Fastest Growing Medium in Audio

Presenter:
Dan Hughley, Focusrite - Los Angeles, CA, USA

Join Dan Hughley of Focusrite for this brand-neutral presentation aimed at answering questions that are frequently asked by new podcasters. After learning what a podcast is and the history of the medium, you will learn what you need to get started, the importance of high quality audio to your audience, how to keep your show going when fatigue sets in, and finish up with some quick tips on how to cost effectively promote your show to gain listeners and subscribers. This promises to be a fun and educational session that you surely should not miss.

 
 

Friday, October 18, 12:00 pm — 12:45 pm (Live Production Stage)

Live Production Stage: LS14 - The 7 Most Common Wireless Mic Mistakes (and What You Can Do about Them)

Moderator:
Karl Winkler, Lectrosonics - Rio Rancho, NM, USA
Panelists:
Christopher Evans, The Benedum Center - Pittsburgh, PA, USA
Jason Glass, Clean Wireless

Anyone who has set up or used a wireless mic system, large or small, has faced many of the same problems. This panel of industry experts will explore the most common issues users bring upon themselves, and provide best practice advice for how to improve your results next time around. The basics of wireless mic technology and how to apply it in the real world will be covered along the way.

 
 

Friday, October 18, 12:00 pm — 1:00 pm (Mix with the Masters Workshop Stage)

AES Mix with the Masters Workshop: MM19 - Young Guru

Presenter:
Young Guru

 
 

Friday, October 18, 12:00 pm — 12:30 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP27 - AES67 and SMPTE ST 2110 - The Vulcan Nerve Pinch to RAVENNA?

Presenter:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany

When RAVENNA was introduced to the industry back in 2010, several – mostly proprietary – AoIP solutions existed, but no standard was yet visible. Although RAVENNA is an open technology approach, with its modular architecture fully based on existing and well-accepted standards, people seemed confused by the wide choice of competing and non-interoperable solutions. The arrival of AES67 in 2013 made people cheer and pay homage to an AoIP standard they believed would finally render any other solution obsolete… Four years later, SMPTE published its suite of ST 2110 documents, with ST 2110-30 (linear PCM audio transport) fully referencing AES67 as its basis. A closer look at these standards reveals that RAVENNA – published some seven years before the arrival of ST 2110 – was built on exactly the same protocols and functions as AES67 and ST 2110. So does this mean RAVENNA is now obsolete, forced into a coma by the AES67/ST 2110 nerve pinch? Andreas Hildebrand, RAVENNA Evangelist at ALC NetworX, Germany, explains why this Vulcan nerve pinch has no effect on RAVENNA at all…

 
 

Friday, October 18, 12:15 pm — 1:30 pm (Demo rooms)

Product Development: PD13 - Vendor Event 3: Menlo

 
 

Friday, October 18, 12:30 pm — 1:00 pm (Booth 266 (Ex. Fl.))

Photo

Audio Builders Workshop Booth Talk: ABT13 - Eurorack and Plugin Design with Delta Sound Labs

Presenter:
Richard Graham, Delta Sound Labs - Cleveland, OH, USA

 
 

Friday, October 18, 12:30 pm — 1:00 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP28 - Introduction to AES70

Presenter:
Ethan Wetzell, OCA Alliance - New York, NY, USA

AES70, also known as OCA, is an architecture for system control and connection management of media networks and devices. AES70 is capable of working effectively with all kinds of devices from multiple manufacturers to provide fully interoperable multivendor networks. This session will provide an overview of the AES70 standard, including the current and future objectives of the standard.

 
 

Friday, October 18, 1:00 pm — 1:45 pm (EDM Stage)

Photo

Electronic Dance Music & DJ Stage: EDJ06 - Waves Product Workshop

Presenter:
Michael Pearson-Adams, Waves - Knoxville, TN, USA

 
 

Friday, October 18, 1:00 pm — 1:30 pm (Booth 266 (Ex. Fl.))

Photo

Audio Builders Workshop Booth Talk: ABT14 - Reduce, Reuse, Recycle: An Approach to Repurposing Broken Gear in DIY Builds

Presenter:
Jason Bitner, Traffic Entertainment Group - Somerville, MA, USA

 
 

Friday, October 18, 1:00 pm — 1:45 pm (Live Production Stage)

Live Production Stage: LS15 - Meyer Sound Presents: Live Touring System

 
 

Friday, October 18, 1:00 pm — 1:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS15 - Sylvia Massy—The Secret Ingredients

What are the secret ingredients of a good session? A certain piece of gear, a certain song, a certain location? Maybe it's all of those things, and so much more! In this lecture, Sylvia Massy shares stories about some of her favorite recording adventures. She also reveals her secrets for making each session memorable.

Sponsored by iZotope

 
 

Friday, October 18, 1:00 pm — 2:00 pm (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM20 - Tchad Blake

Presenter:
Tchad Blake, Real World Studios - UK

 
 

Friday, October 18, 1:30 pm — 2:30 pm (1E15+16)

Special Event: SE11 - Lunchtime Keynote: 1500 or Nothin'

Presenters:
Larrance Dopson
IZ & Bobby Avila, The Avila Brothers

Inspiring and Educating the Next Generation of Producers, Engineers, Creators

The world of audio education is changing. For the modern audio student, the “traditional” curriculum is not enough to compete in today’s music industry. Larrance Dopson (GRAMMY Award-winning producer/instrumentalist and CEO of the 1500 or Nothin’ production/songwriting collective), along with GRAMMY Award-winning producer and songwriter IZ Avila (half of The Avila Brothers), discuss what it takes to make it in today’s fast-paced music production world, and how these new needs have led some music educators and mentors to evolve their approaches to prepare their students for what they’ll encounter out in the real world, while inspiring and motivating these students to create their own opportunities.

 
 

Friday, October 18, 2:00 pm — 3:00 pm (EDM Stage)

Electronic Dance Music & DJ Stage: EDJ07 - Allen & Heath, RELOOP, XONE & HERCULES Present:

Presenter:
Jamie Thompson

Bridging the Gap between Home Production and the Stage

Come learn what you will need to take your home music production to the stage and perform using tools like DJ mixers and midi controllers, as well as software programs like Ableton Live and Traktor Pro. There will be a live demonstration using all of these tools along with a Q&A segment to answer any questions you may have.

 
 

Friday, October 18, 2:00 pm — 2:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS16 - Produce Like a Pro with Warren Huart

Presenter:
Warren Huart, Produce Like a Pro

 
 

Friday, October 18, 2:00 pm — 2:45 pm (Live Production Stage)

Live Production Stage: LS16 - Yamaha Presents: Mixing Furry Monsters

Presenter:
Chris Prinzivalli, Sesame Street

Multiple Emmy Award-winning Chris Prinzivalli, production mixer of Sesame Street, speaks about audio techniques for everyone's favorite Street.

 
 

Friday, October 18, 2:00 pm — 3:00 pm (Mix with the Masters Workshop Stage)

AES Mix with the Masters Workshop: MM21 - David Kahne

Presenter:
David Kahne

 
 

Friday, October 18, 2:00 pm — 2:30 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP29 - AES67 / ST 2110 / NMOS - An Overview on Current SDO Activities

Presenter:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany

Update and report on current standardization activities, including AES67, SMPTE ST 2110. Refresh / summary on commonalities and constraints between AES67 and ST 2110. Brief overview on NMOS developments and activities.

 
 

Friday, October 18, 2:30 pm — 4:00 pm (South Concourse B)

Student / Career: SC17 - SPARS Mentoring

Moderator:
Drew Waters, VEVA Sound

This event is especially suited for students, recent graduates, young professionals, and those interested in career advice. Hosted by SPARS in cooperation with the AES Education Committee, career related Q&A sessions will be offered to participants in a speed group mentoring format. A dozen students will interact with 4–5 working professionals in specific audio engineering fields or categories every 20 minutes. Audio engineering fields/categories include gaming, live sound/live recording, audio manufacturer, mastering, sound for picture, and studio production.

 
 

Friday, October 18, 2:30 pm — 3:00 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP30 - Audio Monitoring Solutions for Audio-over-IP

Presenter:
Aki Mäkivirta, Genelec Oy - Iisalmi, Finland

ST 2110 and AES67 have established audio-over-IP as the next generation standard for audio monitoring. This presentation discusses the benefits of using audio-over-IP over traditional audio monitoring methods, and why the change to audio-over-IP is happening with increasing speed. Practical case examples of using audio-over-IP in professional broadcasting as well as in AV install audio applications are presented.

 
 

Friday, October 18, 2:45 pm — 4:15 pm (1E15+16)

Special Event: RP17 - Platinum Latin Engineers & Producers

Chair:
Andres A. Mayo, 360 Music Lab - Buenos Aires, Argentina
Panelists:
Carli Beguerie, Studio Instrument Rentals/Mastering Boutique - New York, NY, USA
Mauricio Gargel, Mauricio Gargel Audio Mastering - Sao Paulo, SP. Brazil
Andres Millan, Diffusion Magazine - Boutique Pro Audio - Bogotá, Cundinamarca, Colombia
Martin Muscatello, 360 Music Lab
Rafa Sardina, Fishbone Productions, Inc. - Los Angeles, CA, USA; AfterHours Studios - Los Angeles, CA, USA
Camilo Silva F., Camilo Silva F. Mastering - Chia, Cundinamarca, Colombia

Every year this panel gathers a select group of Latin producers and engineers who present their multi-Grammy Award-winning work and explain in detail how they deal with the ever-growing Latin recording industry.

 
 

Friday, October 18, 3:00 pm — 3:45 pm (EDM Stage)

Photo

Electronic Dance Music & DJ Stage: EDJ08 - Studio DMI Presents Luca Pretolesi Masterclass on iZotope Ozone 9

Presenter:
Luca Pretolesi, Studio DMI - Las Vegas, NV, USA

 
 

Friday, October 18, 3:00 pm — 3:30 pm (Booth 266 (Ex. Fl.))

Photo

Audio Builders Workshop Booth Talk: ABT15 - Advanced SigmaStudio Use and the Analog Devices ADAU1452 Family of SigmaDSP

Presenter:
David Thibodeau, Analog Devices - Wilmington, MA, USA

 
 

Friday, October 18, 3:00 pm — 3:45 pm (Live Production Stage)

Live Production Stage: LS17 - Large Scale Festival Sound Systems

 
 

Friday, October 18, 3:00 pm — 3:45 pm (Recording Stage)

Project Studio Expo Recording Stage: RS17 - Listening with Amazon Music HD: High Quality Streaming for the Masses

Presenters:
John Farrey, Amazon Music - Seattle, WA, USA
Jack Rutledge

Streaming music in high definition gives listeners the ability to hear songs the way artists originally recorded them, with all of the emotion, detail, and instrumentation of the original recordings. With the recent launch of Amazon Music HD, music fans no longer need to sacrifice the sound quality so often compressed for the convenience of streaming. John Farrey and Jack Rutledge will use this opportunity to talk about the ideation and reception of Amazon Music HD, Amazon Music’s new lossless streaming offering, bringing customers the highest-quality streaming audio available for the mass market.

 
 

Friday, October 18, 3:00 pm — 4:00 pm (Mix with the Masters Workshop Stage)

Photo

AES Mix with the Masters Workshop: MM22 - Michael Brauer

Presenter:
Michael Brauer, Michael Brauer - New York, NY, USA

 
 

Friday, October 18, 3:00 pm — 3:30 pm (IP Pavilion Theater)

Photo

AoIP Pavilion: AIP31 - Network Automation with Google Sheets?

Presenter:
Ievgen Kostiukevych, European Broadcasting Union - Le Grand-Saconnex, Genève, Switzerland

There are multiple ways to automate your network infrastructure, but what if you need a foolproof and quick solution to let non-network engineers automate certain functions of your network? We will tell you what we've done at the JT-NM Tested events to overcome that!

 
 

Friday, October 18, 3:30 pm — 5:00 pm (South Concourse A)

Engineering Brief: EB4 - Posters: Recording and Production

EB4-1 A Comparative Pilot Study and Analysis of Audio Mixing Using Logic Pro X and GarageBand for iOS
Jiayue Cecilia Wu, University of Colorado Denver - Denver, CO, USA; Orchisama Das, Center for Computer Research in Music and Acoustics (CCRMA), Stanford University - Stanford, CA, USA; Vincent DiPasquale, University of Colorado Denver - Denver, CO, USA
In this pilot study we compare two mixes of a song, one done with GarageBand on iOS and one with Logic Pro X, in a professional studio environment. The audio tracks were recorded and mastered in the same controlled environment by the same engineer. A blind listening survey was given to 10 laypersons and 10 professional studio engineers with at least 10 years of related experience. 80% of the laypersons and 60% of the professional studio engineers reported a higher preference for the Logic Pro X version. To further compare the two productions, we look at (1) short-term perceptual loudness, to quantify dynamic range, and (2) power spectral densities in different frequency bands, to quantify EQ. The analysis provides evidence backing the survey results. The purpose of this study is to examine how, in a real-life scenario, a professional studio engineer can produce the best results using the plugins, effects, and tools available in the GarageBand on iOS and Logic Pro X environments, and how these results are comparatively perceived by both a general audience and professional audio experts.
Engineering Brief 539 (Download now)
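The band-wise spectral comparison mentioned in point (2) can be sketched with a simple periodogram-based band-power measure; the band edges and two-tone test signal below are illustrative, not the study's material.

```python
import numpy as np

def band_power_db(x, fs, bands):
    """Mean periodogram power (dB) within each (lo, hi) frequency band."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return [10.0 * np.log10(np.mean(spec[(freqs >= lo) & (freqs < hi)]))
            for lo, hi in bands]

fs = 44100
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 100 * t) + 0.1 * np.sin(2 * np.pi * 5000 * t)  # toy "mix"

bands = [(20, 250), (250, 2000), (2000, 10000)]  # lows / mids / highs
powers = band_power_db(mix, fs, bands)
```

Comparing such per-band powers between two mixes quantifies EQ differences, while tracking short-term loudness over time quantifies dynamic range.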

EB4-2 The ANU School of Music Post-Production Suites: Design, Technology, Research, and Pedagogy
Samantha Bennett, Australian National University - Canberra, Australia; Matt Barnes, Australian National University - Canberra, Australia
This engineering brief considers the design, construction, technological capacity, research, and pedagogical remit of two post-production suites built at the ANU School of Music. The suites were constructed simultaneously with the recording studio refurbishment detailed in AES e-Brief #397 (2017). This new e-Brief first considers the intention and purpose behind splitting a single, large control room into two separate, versatile post-production spaces. Secondly, it focuses on design and construction, with consideration given to acoustic treatment, functionality, ergonomic workflow, and aesthetics. The e-Brief also considers technological capacity and the benefits of built-in limitations. Finally, the post-production suites are considered in the broader context of both the research and pedagogical activities of the School.
Engineering Brief 540 (Download now)

EB4-3 A Case Study of Cultural Influences on Mixing Preference—Targeting Japanese Acoustic Major Students
Toshiki Tajima, Kyushu University - Fukuoka, Japan; Kazuhiko Kawahara, Kyushu University - Fukuoka, Japan
There is no clear rule in the process of mixing in popular music production, so even with the same musical materials, different mix engineers may arrive at completely different mixes. To address this highly multidimensional problem, listening experiments on mixing preference have been conducted in Europe and North America in previous studies. In this study, additional experiments targeting Japanese students majoring in acoustics were conducted in an acoustically treated listening room, and we integrated the data with those of the previous studies and analyzed them together. The results showed a tendency for both British and Japanese students to prefer (or dislike) the same engineers’ work. Furthermore, an analysis of verbal descriptions of the mixes revealed that they paid the most attention to similar listening points, such as “vocal” and “reverb.”
Engineering Brief 541 (Download now)

EB4-4 A Dataset of High-Quality Object-Based Productions
Giacomo Costantini, University of Southampton - Southampton, UK; Andreas Franck, University of Southampton - Southampton, Hampshire, UK; Chris Pike, BBC R&D - Salford, UK; University of York - York, UK; Jon Francombe, BBC Research and Development - Salford, UK; James Woodcock, University of Salford - Salford, UK; Richard J. Hughes, University of Salford - Salford, Greater Manchester, UK; Philip Coleman, University of Surrey - Guildford, Surrey, UK; Eloise Whitmore, Naked Productions - Manchester, UK; Filippo Maria Fazi, University of Southampton - Southampton, Hampshire, UK
Object-based audio is an emerging paradigm for representing audio content. However, the limited availability of high-quality object-based content and the need for usable production and reproduction tools impede the exploration and evaluation of object-based audio. This engineering brief introduces the S3A object-based production dataset. It comprises a set of object-based scenes as projects for the Reaper digital audio workstation (DAW). They are accompanied by a set of open-source DAW plugins, the VISR Production Suite, for creating and reproducing object-based audio. In combination, these resources provide a practical way to experiment with object-based audio and facilitate loudspeaker and headphone reproduction. The dataset is provided to enable a larger audience to experience object-based audio, for use in perceptual experiments, and for audio system evaluation.
Engineering Brief 542 (Download now)

EB4-5 An Open-Access Database of 3D Microphone Array Recordings
Hyunkook Lee, University of Huddersfield - Huddersfield, UK; Dale Johnson, University of Huddersfield - Huddersfield, UK
This engineering brief presents open-access 3D sound recordings of musical performances and room impulse responses made using various 3D microphone arrays simultaneously. The microphone arrays comprised OCT-3D, 2L-Cube, PCMA-3D, Decca Tree with height, Hamasaki Square with height, and first-order and higher-order Ambisonics microphone systems, providing more than 250 different front-rear-height combinations. The sound sources recorded were a string quartet, piano trio, piano solo, organ, clarinet solo, and vocal group, as well as room impulse responses of a virtual ensemble with 13 source positions captured by all of the microphones. The recordings can be freely downloaded from www.hud.ac.uk/apl/resources. Future studies will use the recordings to formally elicit perceived attributes for 3D recording quality evaluation as well as for spatial audio ear training.
Engineering Brief 543 (Download now)

 
 

Friday, October 18, 3:30 pm — 4:00 pm (Booth 266 (Ex. Fl.))

Photo

Audio Builders Workshop Booth Talk: ABT16 - Learn to Solder

Presenter:
Bob Katz, Digital Domain Mastering - Orlando, FL, USA

 
 

Friday, October 18, 4:00 pm — 5:30 pm (South Concourse B)

Student / Career: SC19 - Sound Girls Mentoring

Please join SoundGirls for a Speed Mentoring Session with Industry Veterans. Get answers to the questions you have about working in professional audio. Sessions will be 30 minutes and we will rotate among mentors.

Recording Arts
Fela Davis: Recording and Live Sound Engineer; Co-Owner of 23db Sound, New York
Jessica Thompson: Audio Mastering, Restoration, and Archiving Engineer
Catherine Vericolli: Owner, Engineer, and Manager of Fivethirteen, Tempe, AZ

Live Sound
Michelle Sabolchick Pettinato: FOH Engineer for Elvis Costello, Styx, Mr. Big, and Goo Goo Dolls; Co-Founder of SoundGirls
Gil Eva Craig: Live Sound Engineer; Partner in Western Audio, New Zealand
Barbara Adams: Live Sound Engineer and Audio Instructor
Karrie Keyes: Monitor Engineer for Pearl Jam and Eddie Vedder; Co-Founder of SoundGirls
Amanda Raymond: Live Sound Engineer

Manufacturing
Sara Elliot: VP of Operations, VUE AudioTechnik

Mentors are subject to change; more mentors TBA.

 
 

Friday, October 18, 4:30 pm — 5:30 pm (1E15+16)

Special Event: SE12 - The Past, Present, and Future of the Legendary Quad Building

Moderator:
Prince Charles Alexander, Berklee College of Music - Boston, MA, USA
Panelists:
DG
Ricky Hosn
David Malekpour, Professional Audio Design, Inc.
Carla Springer

A panel moderated by producer/educator Prince Charles Alexander examines the history and current success of this great epicenter of music production.

 
 

Saturday, October 19, 9:00 am — 5:00 pm (SDA Booth)

Photo

Student / Career: SC20 - Resume Review (for Students, Recent Grads, and Young Professionals)

Moderator:
Alex Kosiorek, Central Sound at Arizona PBS - Phoenix, AZ, USA; Arizona State University - Phoenix, AZ, USA

Students, recent graduates, and young professionals… Often your resume is an employer’s first impression of you. Naturally, you want to make a good one. Employers often use job search websites to search for candidates. Some use automated software to scan your resume and, in some cases, your LinkedIn and social media profiles as well. Questions may arise regarding formatting, length, and keywords and phrases, so that your resume shows up in searches and lands on the desk of the hiring manager. No matter how refined your resume may be, it is always good to have someone else review your materials. Receive a one-on-one, 20-25 minute review of your resume from a hiring manager in the audio engineering business. Plus, if time allows, your cover letter and online presence will be reviewed as well.

Sign up at the student (SDA) booth immediately upon arrival. If you would like to have your resume reviewed on Wednesday, October 16, prior to SDA-1, please email the request to: aesresumereview@outlook.com. You may be asked to upload your resume prior to your appointment. Uploaded resumes will only be seen by the moderator and will be deleted at the conclusion of the 147th Pro Audio Convention.

Reviews take place throughout the convention, by appointment only.

 
 

Saturday, October 19, 10:30 am — 12:00 pm (South Concourse A)

Poster: P16 - Posters: Spatial Audio

P16-1 Calibration Approaches for Higher Order Ambisonic Microphone Arrays
Charles Middlicott, University of Derby - Derby, UK; Sky Labs Brentwood - Essex, UK; Bruce Wiggins, University of Derby - Derby, Derbyshire, UK
Recent years have seen an increase in the capture and production of ambisonic material, as companies such as YouTube and Facebook use ambisonics for spatial audio playback. Consequently, there is now a greater need for affordable higher-order microphone arrays. This work details the development of a five-channel circular horizontal ambisonic microphone intended as a tool for exploring various optimization techniques, focusing on capsule calibration and pre-processing approaches for unmatched capsules.
Convention Paper 10301 (Purchase now)

P16-2 A Qualitative Investigation of Soundbar Theory
Julia Perla, Belmont University - Nashville, TN, USA; Wesley Bulla, Belmont University - Nashville, TN, USA
This study investigated basic acoustic principles and assumptions that form the foundation of soundbar technology. A qualitative listening test compared 12 original soundscape scenes, each comprising five stationary and two moving auditory elements. Subjects listened to a 5.1 reference scene and were asked to rate the “spectral clarity and richness of sound,” “width and height,” and “immersion and envelopment” of stereophonic, soundbar, and 5.1 versions of each scene. ANOVA revealed a significant effect across all three systems. In all three attribute groups, stereophonic was rated lowest, followed by soundbar, then surround. Results suggest that waveguide-based “soundbar technology” may provide a more immersive experience than stereo but is unlikely to be as immersive as true surround reproduction.
Convention Paper 10302 (Purchase now)

P16-3 The Effect of the Grid Resolution of Binaural Room Acoustic Auralization on Spatial and Timbral Fidelity
Dale Johnson, University of Huddersfield - Huddersfield, UK; Hyunkook Lee, University of Huddersfield - Huddersfield, UK
This paper investigates the effect of the grid resolution of binaural room acoustic auralization on spatial and timbral fidelity. Binaural concert hall stimuli were generated using a virtual acoustics program utilizing image source and ray tracing techniques. Each image source and ray was binaurally synthesized using Lebedev grids of increasing resolution, from 6 to 5810 (reference) points. A MUSHRA test was performed in which subjects rated the magnitude of spatial and timbral differences between each stimulus and the reference. Overall, it was found that 6 points were perceived as "Fair," 14 points as "Good," and 26 points and above as "Excellent" on the MUSHRA grading scale, for both spatial and timbral fidelity.
Convention Paper 10303 (Purchase now)

P16-4 A Compact Loudspeaker Matrix System to Create 3D Sounds for Personal Uses
Aya Saito, University of Aizu - Aizuwakamatsu City, Japan; Takahiro Nemoto, University of Aizu - Aizuwakamatsu, Japan; Akira Saji, University of Aizu - Aizuwakamatsu City, Japan; Jie Huang, University of Aizu - Aizuwakamatsu City, Japan
In this paper we propose a new 3D sound system arranged as a two-layer matrix with five loudspeakers on each side of the listener. The system is effective for sound localization and compact enough for personal use. Sound images are created by an extended amplitude panning method combined with head-related transfer functions (HRTFs). The system’s localization performance was evaluated in auditory experiments with listeners. As a result, listeners could identify sound images localized in any azimuth direction and in high elevation directions with only small biases.
Convention Paper 10304 (Purchase now)
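The paper's extended amplitude panning method is not specified here, but the building block that such matrix loudspeaker systems extend, pairwise constant-power (tangent-law) panning between two loudspeakers, can be sketched as follows. This is a hedged illustration: the speaker base angle and the constant-power normalization are assumptions, not details from the paper:

```python
import math

def tangent_law_gains(theta_deg, base_deg=30.0):
    """Stereophonic tangent-law panning gains for a virtual source at
    azimuth theta_deg, with loudspeakers at +/- base_deg.
    Returns (g_left, g_right), normalized to constant power
    (g_l**2 + g_r**2 == 1)."""
    # Tangent law: tan(theta) / tan(base) = (gL - gR) / (gL + gR)
    ratio = math.tan(math.radians(theta_deg)) / math.tan(math.radians(base_deg))
    g_l, g_r = 1.0 + ratio, 1.0 - ratio
    norm = math.hypot(g_l, g_r)
    return g_l / norm, g_r / norm
```

A source panned to 0° yields equal gains on both speakers; panning fully to +30° (the left speaker position, with this sign convention) collapses all energy onto one channel.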

P16-5 Evaluation of Spatial Audio Quality of the Synthesis of Binaural Room Impulse Responses for New Object Positions
Stephan Werner, Technische Universität Ilmenau - Ilmenau, Germany; Florian Klein, Technische Universität Ilmenau - Ilmenau, Germany; Clemens Müller, Technical University of Ilmenau - Ilmenau, Germany
The aim of auditory augmented reality is to create an auditory illusion combining virtual audio objects and scenarios with the perceived real acoustic surroundings. A suitable system, like position-dynamic binaural synthesis, is needed to minimize perceptual conflicts with the perceived real world. The binaural room impulse responses (BRIRs) required have to fit the acoustics of the listening room. One approach to minimizing the large number of BRIRs needed for all source-receiver relations is to synthesize BRIRs from only one measurement in the listening room. The focus of this paper is the evaluation of the resulting spatial audio quality. In most conditions, differences in direct-to-reverberant energy ratio between the reference and the synthesis are below the just-noticeable difference. Furthermore, only small differences are found for perceived overall difference, distance, and direction perception. Perceived externalization is comparable to that achieved with measured BRIRs. Challenges remain in synthesizing more distant sources from a measured source position that is close to the listening position.
Convention Paper 10305 (Purchase now)

P16-6 Withdrawn


P16-7 An Adaptive Crosstalk Cancellation System Using Microphones at the Ears
Tobias Kabzinski, RWTH Aachen University - Aachen, Germany; Peter Jax, RWTH Aachen University - Aachen, Germany
For the reproduction of binaural signals via loudspeakers, crosstalk cancellation systems are necessary. To compute the crosstalk cancellation filters, the transfer functions between the loudspeakers and the ears must be known. If the listener moves, the filters are usually updated based on a model or on previously measured transfer functions. We propose a novel architecture: microphones are placed close to the listener’s ears to continuously estimate the true transfer functions, which are then used to adapt the crosstalk cancellation filters. A fast frequency-domain state-space approach is employed for multichannel system tracking. For simulations of slow listener rotations, it is demonstrated by objective and subjective means that the proposed system successfully attenuates crosstalk of the direct sound components.
Convention Paper 10307 (Purchase now)
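At the core of any crosstalk canceller, adaptive or not, is the per-frequency-bin inversion of the 2×2 matrix of loudspeaker-to-ear transfer functions. A minimal, non-adaptive sketch with Tikhonov regularization is shown below; the paper's adaptive state-space tracking of those transfer functions is not reproduced here, and the regularization constant is an illustrative assumption:

```python
import numpy as np

def crosstalk_filters(H, beta=1e-3):
    """Given H with shape (K, 2, 2), where H[k] holds the
    loudspeaker-to-ear transfer functions at frequency bin k,
    return regularized inverse filters C such that H[k] @ C[k] ~ I.
    Tikhonov regularization (beta) keeps the inverse bounded where
    H[k] is ill-conditioned."""
    C = np.empty_like(H)
    I = np.eye(2)
    for k in range(H.shape[0]):
        Hk = H[k]
        # Regularized least-squares inverse: (H^H H + beta I)^-1 H^H
        C[k] = np.linalg.solve(Hk.conj().T @ Hk + beta * I, Hk.conj().T)
    return C
```

Feeding the binaural signal through C before the loudspeakers then cancels the acoustic crosstalk paths, up to the accuracy of the measured (or, in the paper, continuously estimated) transfer functions.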

P16-8 Immersive Sound Reproduction in Real Environments Using a Linear Loudspeaker Array
Valeria Bruschi, Università Politecnica delle Marche - Ancona, Italy; Nicola Ortolani, Università Politecnica delle Marche - Ancona, Italy; Stefania Cecchi, Università Politecnica delle Marche - Ancona, Italy; Francesco Piazza, Università Politecnica delle Marche - Ancona, Italy
In this paper an immersive sound reproduction system capable of improving the overall listening experience is presented and tested using a linear loudspeaker array. The system aims to provide channel separation over a broadband spectrum by implementing the RACE (Recursive Ambiophonic Crosstalk Elimination) algorithm and a beamforming algorithm based on a pressure-matching approach. A real-time implementation of the algorithm has been performed, and its performance has been evaluated against the state of the art. Objective and subjective measurements have confirmed the effectiveness of the proposed approach.
Convention Paper 10308 (Purchase now)

P16-9 The Influences of Microphone System, Video, and Listening Position on the Perceived Quality of Surround Recording for Sport Content
Aimee Moulson, University of Huddersfield - Huddersfield, UK; Hyunkook Lee, University of Huddersfield - Huddersfield, UK
This paper investigates the influences of the recording/reproduction format, video, and listening position on the quality perception of surround ambience recordings for sporting events. Two microphone systems—First Order Ambisonics (FOA) and Equal Segment Microphone Array (ESMA)—were compared in both 4-channel (2D) and 8-channel (3D) loudspeaker reproductions. One subject group tested audio-only conditions while the other group was presented with video as well as audio. Overall, the ESMA was rated significantly higher than the FOA for all quality attributes tested regardless of the presence of video. The 2D and 3D reproductions did not have a significant difference within each microphone system. Video had a significant interaction with the microphone system and listening position depending on the attribute.
Convention Paper 10309 (Purchase now)

P16-10 Sound Design and Reproduction Techniques for Co-Located Narrative VR Experiences
Marta Gospodarek, New York University - New York, NY, USA; Andrea Genovese, New York University - New York, NY, USA; Dennis Dembeck, New York University - New York, NY, USA; Flavorlab; Corinne Brenner, New York University - New York, NY, USA; Agnieszka Roginska, New York University - New York, NY, USA; Ken Perlin, New York University - New York, NY, USA
Immersive co-located theatre aims to bring the social aspects of traditional cinematic and theatrical experience into Virtual Reality (VR). Within these VR environments, participants can see and hear each other, while their virtual seating location corresponds to their actual position in the physical space. These elements create a realistic sense of presence and communication, which enables an audience to create a cognitive impression of a shared virtual space. This article presents a theoretical framework behind the design principles, challenges, and factors involved in the sound production of co-located VR cinematic productions, followed by a case-study discussion examining the implementation of an example system for a 6-minute cinematic experience for 30 simultaneous users. A hybrid reproduction system is proposed for the delivery of an effective sound design for shared cinematic VR.
Winner of the 147th AES Convention Best Peer-Reviewed Paper Award
Convention Paper 10287 (Purchase now)

 
 

Saturday, October 19, 1:30 pm — 2:30 pm (1E15+16)

Photo

Special Event: SE13 - The Making of the #1 LP "Help Us Stranger" by The Raconteurs

Presenter:
Vance Powell, Global Positioning Services - New York, NY, USA

Grammy Award-winning mix engineer Vance Powell shares the inside stories on the making of The Raconteurs’ #1 album "Help Us Stranger," led by Jack White and released through Third Man Records.

 
 

Saturday, October 19, 3:00 pm — 4:30 pm (South Concourse A)

Poster: P18 - Posters: Perception

P18-1 Comparison of Human and Machine Recognition of Electric Guitar Types
Renato Profeta, Ilmenau University of Technology - Ilmenau, Germany; Gerald Schuller, Ilmenau University of Technology - Ilmenau, Germany; Fraunhofer Institute for Digital Media Technology (IDMT) - Ilmenau, Germany
The classification of musical instruments of the same type is a challenging task not only for experienced musicians but also in music information retrieval. The goal of this paper is to understand how guitar players with different experience levels perform in distinguishing audio recordings of single guitar notes from two iconic guitar models, and to use this knowledge as a baseline for evaluating the performance of machine learning algorithms on a similar task. For this purpose we conducted a blind listening test with 236 participants, in which they listened to 4 single notes from 4 different guitars and had to classify each as a Fender Stratocaster or an Epiphone Les Paul. We found that only 44% of the participants could correctly classify all 4 guitar notes. We also performed machine learning experiments using k-Nearest Neighbours (kNN) and Support Vector Machine (SVM) algorithms on a classification problem with 1292 notes from different Stratocaster and Les Paul guitars. The SVM algorithm had an accuracy of 93.9%, correctly predicting 139 of the 148 audio samples in the testing set.
Convention Paper 10315 (Purchase now)
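As a rough illustration of the simpler of the two algorithms the authors test, a plain k-nearest-neighbours classifier on two synthetic feature clusters might look like this. The features and data below are placeholders standing in for per-note spectral features, not the authors' 1292-note dataset, and the SVM half of their comparison is omitted:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbours classification: Euclidean distance,
    majority vote among the k closest training examples."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest_labels = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest_labels).argmax())
    return np.array(preds)

# Synthetic stand-in for two guitar-type feature clusters
# (two Gaussian clouds with slightly shifted means).
rng = np.random.default_rng(0)
n, d = 200, 12
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(0.8, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)
idx = rng.permutation(2 * n)
X, y = X[idx], y[idx]
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]
accuracy = (knn_predict(X_tr, y_tr, X_te) == y_te).mean()
```

On real spectral features the separation between guitar models is far less clean than between these synthetic clusters, which is why the authors' reported accuracies are the interesting result.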

P18-2 Preference for Harmonic Intervals Based on Overtone Content of Complex Tones
Benjamin Fox, Belmont University - Nashville, TN, USA; Wesley Bulla, Belmont University - Nashville, TN, USA
This study investigated whether the overtone structure of a complex tone affects the perception of consonance in harmonic intervals, i.e., whether overtone content generates preferential differences between intervals. Prior studies suggest harmonicity as the basis for so-called “consonance,” while others suggest exact ratios are not necessary. This test examined listener responses across three tonal “types” in a randomized, double-blind, trinomial forced-choice format. Stimulus types used full, odd, and even overtone series at three relative-magnitude loudness levels. Results revealed no effect of loudness and a generalized but highly variable trend for the even overtone series. However, some subjects exhibited a very strong preference for certain overtone combinations, while others demonstrated no preference.
Convention Paper 10316 (Purchase now)

P18-3 Just Noticeable Difference for Dynamic Range Compression via “Limiting” of a Stereophonic Mix
Christopher Hickman, Belmont University - Nashville, TN, USA; Wesley Bulla, Belmont University - Nashville, TN, USA
This study focused on the ability of listeners to discern the presence of dynamic range compression (DRC) when applied to a stereo recording. Past studies have primarily focused on listener preferences for stereophonic master recordings with varying levels of DRC. A modified two-down, one-up adaptive test presented subjects with an increasingly “limited” stereophonic mix to determine the 70.7% response threshold. Results of this study suggest that DRC settings considered “normal” in recorded music production may be imperceptible when playback levels are loudness-matched. Outcomes of this experiment indicate that the use of so-called “limiting” for commercial purposes, such as signal-chain control, may have no influence on perceived quality, whereas uses for perceived aesthetic advantages should be reconsidered.
Convention Paper 10317 (Purchase now)

P18-4 Discrimination of High-Resolution Audio without Music
Yuki Fukuda, Hiroshima City University - Hiroshima-shi, Japan; Shunsuke Ishimitsu, Hiroshima City University - Hiroshima, Japan
Nowadays the High-Resolution (Hi-Res) audio format, which has a higher sampling frequency (Fs) and quantization bit depth than the Compact Disc (CD) format, is becoming extremely popular. Several studies have been conducted to clarify whether the two formats can be distinguished. However, most of those studies used only music sources to reach a conclusion. In this paper we examine the problems arising from the primary use of music sources for experimental purposes. We also address the question of discrimination between Hi-Res and CD formats using sources other than music, such as noise.
Convention Paper 10318 (Purchase now)

P18-5 Subjective Evaluation of Multichannel Audio and Stereo on Cell Phones
Fesal Toosy, University of Central Punjab - Lahore, Pakistan; Muhammad Sarwar Ehsan, University of Central Punjab - Lahore, Pakistan
With the increasing trend of using smartphones and other handheld electronic devices to access the internet, playback of audio in multichannel formats will eventually gain popularity on such devices. Given the limited options for audio output on handheld devices, it is important to know whether multichannel audio offers an improvement in audio quality over existing formats. This paper presents a subjective assessment of multichannel audio versus stereo played on a mobile phone over headphones. The results show that multichannel audio improves perceived audio quality compared to stereo.
Convention Paper 10319 (Purchase now)

 
 

