Biographies


Invited Speakers

Jason Hockman (Birmingham City University)

Jason Hockman is a sound artist, record label owner, and associate professor of audio engineering at Birmingham City University. He is a member of the Digital Media Technology Laboratory (DMT Lab), where he leads the Sound and Music Analysis (SoMA) Group for computational analysis of sound and music and digital audio processing. Jason conducts research in music informatics, machine listening, and computational musicology, with a focus on rhythm and metre detection, music transcription, and content-based audio effects. He holds a PhD in music research from McGill University (Canada) and a master's in music technology from New York University (USA). As an electronic musician, Jason has had several critically acclaimed releases on a variety of established international record labels, including his own Detuned Transmissions imprint.

David Kant (University of California, Santa Cruz)

David Kant is a composer and researcher based in Santa Cruz, CA, where he teaches electronic music and artificial intelligence at the University of California, Santa Cruz. In his music, research, and teaching, he focuses on critical perspectives in AI, as well as on how machine learning can be used for audio synthesis, music composition, and music analysis. Kant is the bandleader of the Happy Valley Band and a founding member of Indexical, a performing arts nonprofit dedicated to experimentation in music.

Bryan Pardo (Northwestern University)

Bryan Pardo is co-director of the Northwestern University HCI+Design institute and head of the Interactive Audio Lab. Prof. Pardo has appointments in the Department of Computer Science and the Department of Radio, Television and Film. He received an M.Mus. in Jazz Studies in 2001 and a Ph.D. in Computer Science in 2005, both from the University of Michigan, and has authored over 100 peer-reviewed publications. He has developed speech analysis software for the Speech and Hearing department of the Ohio State University and statistical software for SPSS, and has worked as a machine learning researcher for General Dynamics. He has collaborated on and developed technologies acquired and patented by companies such as Bose, Adobe, and Ear Machine. While finishing his doctorate, he taught in the Music Department of Madonna University. When he is not teaching or researching, he performs on saxophone and clarinet with the bands Son Monarcas and The East Loop.

Jesse Engel (Google Brain)

Jesse Engel is lead research scientist on Magenta, a research team within Google Brain exploring the role of machine learning in creative applications. He earned his bachelor's and Ph.D. at UC Berkeley, studying the Martian atmosphere and quantum dot nanoelectronics respectively, and completed a joint postdoc at Berkeley and Stanford on neuromorphic computing. Afterward, he worked with Andrew Ng to help found the Baidu Silicon Valley AI Lab and was a key contributor to DeepSpeech 2, a speech recognition system named one of the ‘Top 10 Breakthroughs of 2016’ by MIT Technology Review. He joined Google Brain in 2016, where his research on Magenta includes creating new generative models for audio (DDSP, NSynth) and symbolic music (MusicVAE, GrooVAE), adapting models to user preferences (Latent Constraints, MIDI-Me), and working to close the gap between research and musical applications (NSynth Super, Magenta Studio). Outside of work, he is also a professional-level jazz guitarist, and likes to include in his bio that he once played an opening set for the Dalai Lama at the Greek Theatre.

Alexander Lerch (Georgia Institute of Technology)

Alexander Lerch is an Associate Professor at the Center for Music Technology, Georgia Institute of Technology, where he teaches computers to listen to and comprehend music. His research, at the intersection of signal processing, machine learning, and music, creates artificially intelligent software for music generation, production, and consumption. Lerch is the author of the textbook "An Introduction to Audio Content Analysis" (IEEE/Wiley 2012). Before joining Georgia Tech, he was Co-Founder and Head of Research at zplane.development, an industry leader in music technology licensing, where he worked on algorithms for time-stretching and automatic key detection. zplane technologies are now used by millions of musicians and producers worldwide.

Fabian-Robert Stöter (Inria)

Fabian-Robert Stöter received his diploma degree in electrical engineering in 2012 from Leibniz Universität Hannover and obtained his Ph.D. in the field of audio separation using machine learning in the research group of Bernd Edler at the International Audio Laboratories Erlangen, Germany. Since 2018, he has been a postdoctoral researcher at Inria, France. His research interests include supervised and unsupervised methods for audio source separation and signal analysis of highly overlapped acoustic sounds; more recently, he has also become interested in machine learning for ecoacoustics and biodiversity.

Stefan Uhlich (Sony R&D Center)

Stefan Uhlich received the Dipl.-Ing. and PhD degrees in electrical engineering from the University of Stuttgart, Germany, in 2006 and 2012, respectively. From 2007 to 2011 he was a research assistant at the Chair of System Theory and Signal Processing, University of Stuttgart, where he worked on statistical signal processing, focusing especially on parameter estimation theory and methods. Since 2011, he has been with the Sony Stuttgart Technology Center, where he works as a Principal Engineer on problems in music source separation, speech enhancement, and deep neural network compression.

JT Colonel (Queen Mary University of London)

Joseph T. Colonel is a second-year PhD student in the Centre for Digital Music at Queen Mary University of London. His work focuses on applying machine learning and neural networks to modelling music production behaviour. He received his bachelor's and master's degrees in electrical engineering from the Cooper Union in New York City, where he developed neural network audio effects for timbre interpolation and synthesis.

Christian Steinmetz (Queen Mary University of London)

Christian Steinmetz is a first-year PhD student in AI and Music within the Centre for Digital Music at Queen Mary University of London. His research interest is in extending deep learning approaches for high-fidelity audio processing with applications in music production. He holds a master's degree in Sound and Music Computing from Universitat Pompeu Fabra, where he worked on extending neural audio effects for applications in automated multitrack mixing, as well as bachelor's degrees in both Electrical Engineering and Audio Technology from Clemson University. He has also worked as an intern at Facebook Reality Labs, Dolby Labs, Bose, and Cirrus Logic.

Ishwarya Ananthabhotla (MIT Media Lab)

Ishwarya Ananthabhotla is a doctoral candidate in the Responsive Environments group at the MIT Media Lab, Cambridge, MA, USA. She is interested in problems at the intersection of audio signal processing and machine learning, such as intelligent audio compression, summarization, manipulation, and generation. As part of her thesis work, she has been exploring ways to capitalize on principles of human perception, cognition, memory, and attention to rethink traditional paradigms for ubiquitous audio capture, representation, and retrieval. Ishwarya graduated from MIT with an SB (2015) and an M.Eng. (2016) in Electrical Engineering and Computer Science, and has interned at Spotify Research and Facebook Reality Labs over the course of her PhD. She was supported by the NSF Graduate Research Fellowship from 2016 to 2019 and is currently supported by the 2020 Apple AI/ML Fellowship. Outside of her research, she is passionate about music and performance.


Panelists

Elias Kokkinis (Co-founder and CTO, Accusonus)

Elias Kokkinis is the co-founder and CTO of Accusonus. He holds a Diploma in Electrical Engineering and a Ph.D. in audio signal processing from the University of Patras, and is the co-inventor of six US patents on signal processing. As CTO of Accusonus, he has led the research efforts of the R&D team behind the novel technologies driving the company's successful products. He has several years of experience as a sound engineer in recording studios and concert halls, and was also a drummer active in the local music scene of Patras.

Drew Silverstein (Co-founder and CEO, Amper Music, Inc.)

Drew Silverstein is the CEO and co-founder of Amper Music. Founded in 2014, Amper Music combines the highest levels of artistry with groundbreaking technology to empower anyone to create unique, professional music, instantly. He was named to the Forbes 2018 "30 Under 30" list for music. Prior to Amper Music, Drew was an award-winning composer, producer, and songwriter for film, television, and video games at Sonic Fuel Studios in Los Angeles. He graduated from Vanderbilt University's Blair School of Music, where he studied Music Composition and Italian, and holds an MBA from Columbia Business School.

Maya Ackerman (Co-founder and CEO, WaveAI)

Dr. Maya Ackerman is a leading expert on Artificial Intelligence and Computational Creativity, named a "Woman of Influence" by the Silicon Valley Business Journal. Ackerman is CEO and co-founder of WaveAI, where she and her team have created leading AI-driven technology for assisting music professionals and aspiring artists in the creation of original vocal songs, with products including LyricStudio and ALYSIA. Her work has been widely showcased in the press, including NBC News, New Scientist, Grammy.com, and international television. Dr. Ackerman has been an invited speaker at the United Nations, Google, IBM Research, and Stanford University, among other top venues. She earned her PhD from the University of Waterloo, held postdoctoral fellowships at Caltech and UC San Diego, and is a Computer Science and Engineering professor at Santa Clara University. Maya is also an opera singer and music producer. https://www.wave-ai.net/

Julian Parker (Principal Software Engineer, Native Instruments)

Julian Parker is the Principal Software Engineer for Machine Learning and DSP at Native Instruments GmbH. He received his bachelor's degree in Natural Sciences from the University of Cambridge in 2005, an M.Sc. in Acoustics & Music Technology from the University of Edinburgh in 2008, and his doctoral degree from Aalto University, Finland, in 2013. Since 2013 he has been employed at Native Instruments, where he has worked on a wide range of products in the sound synthesis and effects domain and now leads DSP and audio-focused ML research and development. Julian is an active member of the academic community and has published on a variety of topics including reverberation, physical modelling, digital filter design, and machine learning.


Moderators

Jonathan Bailey (CTO, iZotope)

Jonathan Bailey is a technologist and musician with over two decades of experience building innovative products. As CTO of iZotope, Jonathan is passionate about the invention of new technologies for creative applications and leads the vibrant and talented product, research and engineering teams in the development of iZotope’s award-winning line of products. Prior to joining iZotope, Jonathan was CTO at Curious Brain, a mobile music start-up that developed musical learning apps, and Lead Developer at Sonik Architects, where he collaborated with BT on the early development of Breaktweaker. Jonathan is also a drummer and composer whose work has spanned performances with funk trombonist Fred Wesley to authoring algorithmic music for the 2011 Venice Biennale. Jonathan is a member of both the Audio Engineering Society (AES) and National Academy of Television Arts and Sciences (The Emmys) and earned degrees in Computer Science at Stanford University and Music Synthesis at Berklee College of Music.

Jay LeBoeuf (Head Of Business Development, Descript)

Jay leads Business Development at Descript, a company creating tools for new media creators. Since joining, Jay has led the company to become a near-industry standard for how podcasts are created. Previously, Jay founded the industry-education nonprofit Real Industry, which continues to serve over 40 universities.

In 2008, Jay founded Imagine Research, an early music technology startup that pioneered artificial intelligence in music and post production and in content recommendation. In 2012, iZotope acquired Imagine Research, and Jay joined iZotope's executive team, leading research and development, technology strategy, and intellectual property. His career began with a decade as an engineer and then researcher in the Advanced Technology Group at Avid, contributing to Grammy and Academy Award-winning Pro Tools software and hardware.

For over 10 years, Jay has lectured on media technology and business at Stanford. He holds adjunct appointments at Carnegie Mellon and the University of Michigan, and was honored as a Bloomberg Business Innovator.
