Christian Steinmetz of Clemson University
Tell us a little about yourself. Where are you from and what do you study?
I am originally from South Carolina in the US, where I studied Electrical Engineering and Audio Technology during my undergrad at Clemson University. Currently I am a master’s student at Universitat Pompeu Fabra in Barcelona, studying Sound and Music Computing within the Music Technology Group (MTG).
What initiated your passion for audio? When did it start?
My interest in audio grew out of my experience as a music listener. Early on I became involved in building speaker enclosures, demoing different headphones, and experimenting with amplifiers to try to build a better-sounding listening system. Eventually, this led me into the world of music production and audio engineering, because I was interested in making recordings that sounded the way I wanted. Throughout high school and my undergrad I worked as a recording, mixing, and mastering engineer. At the same time, I focused on applying engineering to build tools that assist in and extend the workflow of audio engineers. I am continuing this line of research in my thesis here at the MTG, applying deep learning to tasks in music signal processing.
Tell us about the production of your submission. What is the story behind it? What inspired it? How long did you work on it? Was it your first entry?
My project, flowEQ, aims to provide a simplified interface to the classic five-band parametric equalizer. To use a parametric EQ effectively, an audio engineer must have an intimate understanding of the gain, center frequency, and Q controls, as well as how multiple bands can be used in tandem to achieve a desired timbral adjustment. For the amateur audio engineer or musician this often presents too much complexity, and flowEQ aims to solve this problem by providing an intelligent interface geared towards these users. By applying recent techniques in machine learning, such as the disentangled variational autoencoder (β-VAE), we can use equalizer settings collected from audio engineers (via the SAFE-DB) to learn a well-structured, low-dimensional representation of the EQ’s parameter space. This low-dimensional space then allows the user to control all thirteen knobs of the equalizer with only two controls, for example. For the inexperienced user this offers a powerful way to search across possible EQ configurations, using their ears to find the desired effect while drawing on knowledge aggregated from trained audio engineers. If you are interested in learning more about how all of this works, check out the project webpage (https://flowEQ.ml), where I go into the nitty-gritty details. This was not my first entry at AES. Last year I presented my reverb plugin, NeuralReverberator, in the MATLAB plugin competition, and the year before that I presented a phase analysis plugin that aims to help audio engineers improve microphone placement for better drum recordings.
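The core idea described above, a decoder that maps a low-dimensional latent coordinate to the full set of EQ parameters, can be sketched in a few lines. This is an illustrative example only: the network weights below are random stand-ins (in flowEQ they would come from a β-VAE trained on SAFE-DB data), and the parameter layout and ranges are assumptions chosen for the sketch, not the plugin's actual values.

```python
import numpy as np

# Hypothetical decoder: a 2-D latent point -> 13 EQ parameters.
# Assumed layout for a five-band EQ: low shelf (gain, freq),
# three peaking bands (gain, freq, Q), high shelf (gain, freq)
# -> 2 + 3*3 + 2 = 13 knobs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 13)), np.zeros(13)

# Illustrative parameter ranges (gain in dB, freq in Hz, Q unitless).
PARAM_MIN = np.array([-12, 20,  -12, 80, 0.1,  -12, 500, 0.1,  -12, 2000, 0.1,  -12, 5000])
PARAM_MAX = np.array([ 12, 200,  12, 800, 10,   12, 5000, 10,   12, 10000, 10,   12, 20000])

def decode(z):
    """Map a 2-D latent coordinate to 13 denormalized EQ parameters."""
    h = np.tanh(z @ W1 + b1)                # hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> values in (0, 1)
    return PARAM_MIN + y * (PARAM_MAX - PARAM_MIN)

# Moving a 2-D "joystick" explores the learned space of EQ settings.
params = decode(np.array([0.5, -1.0]))
```

In the real plugin, because the latent space is learned from engineers' settings, nearby latent points decode to perceptually related EQ curves, which is what makes a two-knob search through thirteen parameters usable.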
What/Who made you join AES?
I first learned of the AES through my audio technology professor at Clemson. After discovering the journal and diving into all of the interesting research being published, I decided to join. Shortly after learning about the yearly convention held in NYC, I set a goal for myself to find a way to attend. I came up with a project idea and built a plugin to present during the Student Design Competition. After sharing it with my professor, I was able to receive funding from my department to travel to the convention. Attending the AES Convention for the first time in 2017 was one of the major moments in my development as a researcher, and it solidified my interest in continuing my research in this field.
Tell us about your favorite experiences at the 147th AES Convention in New York!
My favorite part of the convention was getting to present and share my project with other people interested in audio engineering. Meeting new people with the same interests and their own unique perspectives is, for me, one of the highlights of a convention like AES. In addition, I enjoyed attending many of the talks and paper sessions, where I got to hear from some of the most influential researchers in the audio community.
Posted: Wednesday, February 19, 2020