
AES 60th Conference: Keynote Speakers

Dereverberation and Reverberation of Audio, Music, and Speech


The AES 60th Conference will feature three keynote lectures by experts in different research areas related to reverberation and dereverberation.

Keynote 1

Wednesday Feb. 3, 2016, 09:20 – 10:20 (Auditorium)

More Than Fifty Years of Artificial Reverberation
Vesa Välimäki (Aalto University, Finland)

Recent research related to artificial reverberation is reviewed. Advances in delay networks, convolution-based techniques, physical room models, and virtual analog reverberation models are described. Many new developments are related to the feedback delay network, which is still the most popular parametric reverberation method. Additionally, three specific methods are discussed in detail: velvet-noise reverberation methods, scattering delay networks, and a modal architecture for artificial reverberation. It becomes evident that research on artificial reverberation and related topics continues to be as active as ever.
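To make the feedback delay network mentioned above concrete, here is a minimal illustrative sketch (not taken from the talk; all parameter values are assumptions): four delay lines of mutually prime lengths are mixed through an orthogonal Householder feedback matrix and attenuated each pass through the loop.

```python
import numpy as np

def fdn_reverb(x, delays=(1499, 1889, 2381, 2971), g=0.97):
    """Minimal 4-channel feedback delay network (illustrative sketch).

    x      : mono input signal (1-D array)
    delays : delay-line lengths in samples, chosen mutually prime so
             that echoes do not coincide
    g      : per-loop attenuation (< 1 for stability)
    """
    N = len(delays)
    # Householder feedback matrix: orthogonal, so the loop preserves
    # energy except for the explicit attenuation g.
    A = np.eye(N) - 2.0 / N * np.ones((N, N))
    bufs = [np.zeros(d) for d in delays]
    idx = [0] * N
    y = np.zeros(len(x))
    for n in range(len(x)):
        # read the delay-line outputs and sum them into the output
        outs = np.array([bufs[i][idx[i]] for i in range(N)])
        y[n] = outs.sum()
        # mix through the feedback matrix, attenuate, add the input
        back = g * (A @ outs) + x[n]
        for i in range(N):
            bufs[i][idx[i]] = back[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return y
```

Feeding an impulse through this structure yields a dense, exponentially decaying echo pattern; practical designs add per-band absorption filters in the loop, which this sketch omits.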

Vesa Välimäki is a Professor of audio signal processing at Aalto University, Espoo, Finland. He received the Master of Science in Technology and the Doctor of Science in Technology degrees, both in electrical engineering, from the Helsinki University of Technology, Espoo, Finland, in 1992 and 1995, respectively. In 1996, he was a Postdoctoral Research Fellow at the University of Westminster, London, UK. In 2008-2009, he was a Visiting Scholar at the Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, Stanford, CA. He is a Fellow of the AES, a Fellow of the IEEE, and a Life Member of the Acoustical Society of Finland. He was the Papers Chair of the AES 22nd International Conference on Virtual, Synthetic, and Entertainment Audio in 2002 and was the Chairman of the International Conference on Digital Audio Effects, DAFx-08, in 2008. Currently he is a Senior Area Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing and an Editorial Board Member of The Scientific World Journal. He is the Guest Editor of the forthcoming special issue of Applied Sciences on audio signal processing.


Keynote 2

Thursday Feb. 4, 2016, 09:00 – 10:00 (Auditorium)

How do humans benefit from binaural listening when recognizing speech in noisy and reverberant conditions?
Thomas Brand (University of Oldenburg, Germany)

The human auditory system exploits the directivity of the ears together with better-ear listening and binaural processing to achieve astonishingly high speech recognition in complex listening situations involving interfering sound sources and reverberation. The spatial relation between the target speech source and the interfering sound sources influences intelligibility in such conditions. This can be described quantitatively using a binaural speech intelligibility model (BSIM) that applies a binaural equalization-and-cancellation processing stage. The influence of early reflections and their directions, as well as of reverberation, can be predicted by analysing the head-related room impulse response (HRIR). Early reflections of the target speech signal can be integrated into the target, whereas later reflections and reverberation have to be treated as noise.
(Joint work with Anna Warzybok, Jan Rennies, and Christopher Hauth)
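As a toy illustration of the equalization-cancellation principle underlying BSIM (a simplified setup assumed for illustration, not the model presented in the talk): when ear signals differ only by the interferer's interaural time difference, delaying one ear signal so the interferer is time-aligned across ears and subtracting removes the interferer exactly, while a target with a different ITD survives, only comb-filtered.

```python
import numpy as np

# Toy equalization-cancellation (EC) demo. The setup (ITD-only ear
# signals, perfectly known interferer ITD) is an idealized assumption.

def ec_cancel(left, right, interferer_itd):
    """Equalize: delay the right-ear signal by the interferer's ITD
    (in samples) so the interferer is aligned across ears, then
    cancel it by subtraction."""
    return left - np.roll(right, interferer_itd)

fs = 16000
rng = np.random.default_rng(0)
target = rng.standard_normal(fs)       # frontal target: ITD = 0
interferer = rng.standard_normal(fs)   # lateral interferer
itd = 8                                # interferer ITD in samples

left = target + interferer
right = target + np.roll(interferer, -itd)  # interferer leads at the right ear

out = ec_cancel(left, right, itd)
# out equals target - np.roll(target, itd): the interferer is removed
# entirely, the target only comb-filtered, so the output SNR improves.
```

Real listeners apply this strategy imperfectly; models such as BSIM therefore add internal processing noise to the equalization and cancellation parameters, which this sketch leaves out.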

Thomas Brand studied physics at the University of Göttingen, Germany. He received his diploma in 1994 and his Ph.D. in 1999. He heads the research group “Speech audiology” within the “Medizinische Physik” section of the Department of Medical Physics and Acoustics at the University of Oldenburg, Germany. His research interests include the development of psychoacoustic procedures for assessing speech and loudness perception, psychoacoustic models of speech intelligibility in adverse acoustic conditions for listeners with normal and with impaired hearing, binaural hearing, and linguistic processing and attention in speech perception. Thomas Brand coordinates the Master's program “Hearing Technology and Audiology” at the University of Oldenburg, where he also teaches.


Keynote 3

Friday Feb. 5, 2016, 09:00 – 10:00 (Auditorium)

Fifty Years of Reverberation Reduction: From Analog Signal Processing to Machine Learning
Emanuël Habets (International Audio Laboratories Erlangen, Germany; Fraunhofer IIS, Germany)

The problem of reverberation reduction has been of interest to researchers for the past five decades, and a wide variety of solutions has been proposed. This interest is largely driven by the demand for high-quality hands-free communication devices, improved hearing aids, robust human-machine interfaces, and audio upmixing systems. This talk will provide an overview of different reverberation models as well as fundamentally different reverberation reduction approaches, ranging from analog signal processing techniques introduced in the 1960s to state-of-the-art machine learning methods.
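One of the simplest reverberation models used in this literature is the statistical late-reverberation model: the room impulse response tail is modeled as exponentially decaying Gaussian noise parameterized by the reverberation time T60. The sketch below (function and parameter names are assumptions for illustration, not from the talk) generates such a model response.

```python
import numpy as np

def late_reverb_rir(t60, fs=16000, length=None, seed=0):
    """Statistical late-reverberation model: white Gaussian noise
    shaped by an exponential decay envelope. Illustrative sketch.

    t60 : reverberation time in seconds (time for a 60 dB decay)
    """
    if length is None:
        length = int(t60 * fs)
    rng = np.random.default_rng(seed)
    n = np.arange(length)
    # decay constant chosen so the envelope drops by 60 dB
    # (a factor of 10**-3) after t60 seconds
    decay = np.exp(-3.0 * np.log(10) * n / (t60 * fs))
    return rng.standard_normal(length) * decay

h = late_reverb_rir(t60=0.5)
# a reverberant signal is then modeled as the convolution of the
# dry signal s with h, e.g. x = np.convolve(s, h)
```

Many dereverberation methods estimate only the parameters of such a statistical model (T60 and the direct-to-reverberant ratio) rather than the full impulse response, which is one reason the model is popular.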

Emanuël Habets is an Associate Professor at the International Audio Laboratories Erlangen (a joint institution of the Friedrich-Alexander-University Erlangen-Nürnberg and Fraunhofer IIS), and Head of the Spatial Audio Research Group at Fraunhofer IIS, Germany.
He received the M.Sc. and Ph.D. degrees in electrical engineering from the Technische Universiteit Eindhoven (TU/e), The Netherlands, in 2002 and 2007, respectively. From 2007 until 2009, he was a Postdoctoral Fellow at the Technion - Israel Institute of Technology and at Bar-Ilan University in Ramat Gan, Israel. From 2009 until 2010, he was a Research Fellow at Imperial College London, United Kingdom.
His research interests are in the areas of audio and acoustic signal processing, and he has worked in particular on dereverberation, artificial reverberation, microphone and loudspeaker array processing, acoustic system identification and equalization, and source localization and tracking.
He is the recipient, with S. Gannot and I. Cohen, of the 2014 IEEE Signal Processing Letters Best Paper Award.
He was a General Co-Chair of the 2013 International Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) in New Paltz, New York, and of the 2014 International Conference on Spatial Audio (ICSA) in Erlangen, Germany. He was a Guest Editor of the IEEE Journal of Selected Topics in Signal Processing, and of the EURASIP Journal on Advances in Signal Processing. Currently, he is an Associate Editor of the IEEE Signal Processing Letters, a member of the IEEE Technical Committee on Audio and Acoustic Signal Processing, and a Vice-Chair of the EURASIP Special Area Team on Acoustic, Sound and Music Signal Processing.


AES - Audio Engineering Society