Acoustic Blind Source Separation using Graphical Models
We outline examples of machine learning algorithms that use graphical models to represent speech signals in a systematic manner. Linear generative models have recently gained popularity because they learn efficient codes for sound signals and allow the analysis of salient sound features, which can be used to model different types of sounds, individual speech and speaker characteristics, or classes of speakers. The generative-model principle can be extended in time and space to handle dynamics and environmental acoustics. We present two examples of blind source separation in a graphical-model framework: first, a method for the difficult problem of separating multiple sources given only a single-channel observation; second, a method for multi-channel observations that accounts for reverberation, sensor noise, and other challenges of real acoustic environments.
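To make the multi-channel setting concrete, the sketch below separates an instantaneous two-channel mixture with FastICA implemented in NumPy. This is a hedged illustration only: the paper's graphical-model approach additionally handles reverberation and sensor noise, which this toy example omits, and the toy sources (a sine and a sawtooth) merely stand in for speech signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent toy sources standing in for speech signals.
t = np.linspace(0, 1, 4000)
s1 = np.sin(2 * np.pi * 7 * t)          # sine tone
s2 = 2 * ((t * 13) % 1) - 1             # sawtooth
S = np.vstack([s1, s2])                 # shape (2, N)

# Instantaneous mixing matrix (no reverberation or sensor noise here).
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Center and whiten the observed mixtures.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / X.shape[1]
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA with a tanh nonlinearity, deflation over both components.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(w @ Z)
        g_prime = 1 - g ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
        # Gram-Schmidt decorrelation against previously found components.
        w_new -= W[:i].T @ (W[:i] @ w_new)
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

S_hat = W @ Z   # recovered sources (up to permutation, sign, and scale)

# Each recovered component should correlate strongly with one true source.
corr = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
print(corr.max(axis=1))
```

The recovered components match the true sources only up to permutation, sign, and scale, which is the well-known ambiguity of blind source separation; graphical-model formulations like the paper's resolve the harder single-channel case by adding prior models of the sources themselves.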