
Committee



Event Chair

Jonathan Wyner

AES President-Elect (2020)

 

Jonathan Wyner is a GRAMMY-nominated mastering engineer and educator, with credits spanning 35 years on over 5,000 records for artists including David Bowie, Bruce Springsteen, Nirvana, Aerosmith, Miles Davis, Pink Floyd, and James Taylor. Jonathan now works alongside the product development team at iZotope as Director of Education, helping to develop technology at the intersection of AI and music production. He continues his mastering work at M Works Mastering studios in Cambridge, MA, which he established in 1991. Jonathan is also a Music Production and Engineering professor at Berklee College of Music and has conducted seminars around the world, presenting with industry leaders such as George Massenburg, Doug Sax, and Bob Ludwig.

Program Chair

Christian Uhle

Chief Scientist

Fraunhofer IIS, Erlangen, Germany

 


Christian Uhle is Chief Scientist in the Audio and Media Technologies division of Fraunhofer IIS, Erlangen, Germany, and in the International Audio Laboratories Erlangen. He received his Dipl.-Ing. and PhD degrees from the Technical University of Ilmenau, Germany, in 1997 and 2008, respectively. His research encompasses automotive sound reproduction, semantic audio processing, blind source separation, dialog enhancement, digital audio effects, natural language processing, and the application of machine learning to audio signal processing. He is a member of the AES and chairs the AES Technical Committee on Semantic Audio Analysis.

Program Chair

Gordon Wichern

Principal Research Scientist

Mitsubishi Electric Research Laboratories, Cambridge, MA, USA

 


Gordon Wichern is a Principal Research Scientist at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA. He received his PhD in electrical engineering from Arizona State University, with a concentration in arts, media and engineering, where he was supported by a National Science Foundation (NSF) Integrative Graduate Education and Research Traineeship (IGERT) for his work on environmental sound recognition. He was previously a member of the research team at iZotope, Inc., where he focused on applying novel signal processing and machine learning techniques to music and post-production software, and a member of the Technical Staff at MIT Lincoln Laboratory, where he worked on radar signal processing. His research interests include audio, music, and speech signal processing, machine learning, and psychoacoustics.

Program Chair

Andy Sarroff

Research Engineer

iZotope, Inc., Cambridge, MA, USA

 

Andy Sarroff is a Research Engineer at iZotope, Inc. in Cambridge, MA. Andy received an MM in Music Technology from New York University and a PhD in Computer Science from Dartmouth College with a dissertation on "Complex Neural Networks for Audio." He has been a visiting researcher in the Speech & Audio group at Mitsubishi Electric Research Laboratories (MERL), as well as Columbia University's Laboratory for the Recognition and Organization of Speech and Audio (LabROSA). Andy's research interests include machine learning, machine listening and perception, and digital audio signal processing.

Event Coordinator

Pippin Bongiovanni

Musicologist and iZotope Content Marketer

iZotope, Inc., Cambridge, MA, USA

 

Pippin Bongiovanni is a writer, musicologist, and member of the iZotope Education team. She received an MMus in Modern Musicology with Distinction from the University of Edinburgh, and a BA in Popular Music History from Hampshire College. She was previously the Web Editor for the International Association of Popular Music Studies, US Branch. Her work explores the use of audio in AR and VR environments with further interests in ludomusicology, the role of nostalgia in audio, and the psychology of performative alter egos.

Logistics Chair

David Moffat

Research Assistant in Audio Signal Processing and AI

Faculty of Arts, Humanities and Business

University of Plymouth, UK

 

David Moffat is a Lecturer in Sound and Music Computing at the University of Plymouth, UK. He previously worked as a postdoctoral researcher within the Audio Engineering Group of the Centre for Digital Music at Queen Mary University of London, where he also received his PhD in Computer Science. His research focuses on applied audio signal processing and AI, particularly the development of intelligent mixing and audio production tools through the use of semantic audio and machine learning.

