Presentations

Keynote

Holly Herndon

Machine Learning & the Creative Process


Over the last few years, Holly Herndon has developed, raised, and taught her A.I. "baby," Spawn. In this artist talk, she'll discuss her unique approach to A.I., how it feeds her creative process, and the future of A.I. research.


Invited Talks

Jason Hockman

Advances in breakbeat analysis, synthesis and rhythm transformation

David Kant

Machine Listening as a Generative Model: Machine Learning for Music Composition

Bryan Pardo

Using machine learning to improve voice recording, remix music and transcribe melodies

Jesse Engel

Neural Audio Synthesis for Music

Alexander Lerch

Audio Content Analysis

Fabian-Robert Stöter and Stefan Uhlich

Current Trends in Audio Source Separation

JT Colonel and Christian Steinmetz

Deep Learning Approaches to Multitrack Mixing

Ishwarya Ananthabhotla

“Cognitive Audio”: Enabling Machine Learning Systems with an Understanding of How We Hear


Breakout Sessions

Scott H. Hawley (Belmont University)

Learning Tunable Audio Effects via Neural Networks with Self-Attention

Stephen Travis Pope (FASTLab, TRQK)

30 Years of Music Information Retrieval Applications

Jordie Shier (University of Victoria)

Programming Synthesizers with Deep Learning Networks

Ethan Manilow (Northwestern University)

Synthesize, Separate, and Repeat: Some Notes on Incorporating Notes into Source Separation

Cumhur Erkut (Aalborg University Copenhagen)

Developing a single-channel speech denoising algorithm with deep learning

Cumhur Erkut (Aalborg University Copenhagen)

Towards Differentiable Sound and Music Computing

Marko Stamenovic (Bose)

TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids

Yugo Mafra Kuno, Bruno Sanches Masiero, and Nilesh Madhu (School of Electrical and Computer Engineering, University of Campinas)

Analysis of a neural network approach to linear beamforming

Shubhr Singh (Queen Mary University of London)

Parameter automation for dynamic range compression using a Siamese neural network and reference audio

Jonathan D Ziegler (Stuttgart Media University, University of Tuebingen)

An End-to-End Approach to Neural Filter-and-Sum Beamforming

Ana Gabriela Pandrea (Universitat Pompeu Fabra)

End-to-End Music Emotion Recognition: Towards Language-Sensitive Models

Justin Swaney (Samply, Inc.)

Deep audio embeddings for automatic sample labeling and visualization

Danilo Comminiello (Sapienza University of Rome)

Learning 3D Sound Sources in the Quaternion Domain

