AES E-Library

Disentangled estimation of reverberation parameters using temporal convolutional networks

Reverberation is ubiquitous in everyday listening environments, from meeting rooms to concert halls and recording studios. While reverberation is usually described by the reverberation time, gaining further insight into the characteristics of a room requires conducting acoustic measurements and calculating each reverberation parameter manually. In this study, we propose ReverbNet, an end-to-end deep learning-based system to non-intrusively estimate multiple reverberation parameters from a single speech utterance. The proposed approach is evaluated on room reverberation simulated by two popular effect processors. We show that the proposed approach can jointly estimate multiple reverberation parameters from speech signals and can generalise to unseen speakers and diverse simulated environments. The results also indicate that the use of multiple branches disentangles the embedding space from misalignments between input features and subtasks.
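For context, the manual route the abstract refers to — measuring an impulse response and computing each parameter by hand — is typically done via Schroeder backward integration of the energy decay curve. The sketch below is not from the paper; it is a minimal illustration of that classical baseline, using a synthetic exponentially decaying noise burst as a stand-in for a measured room impulse response (the sampling rate, decay target, and T20 fit range are illustrative assumptions):

```python
import numpy as np

def estimate_rt60(ir, fs, db_start=-5.0, db_end=-25.0):
    """Estimate RT60 from an impulse response: Schroeder backward
    integration, then a linear fit on the -5 to -25 dB span of the
    energy decay curve (the T20 method), extrapolated to -60 dB."""
    energy = ir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]        # Schroeder integral
    edc_db = 10.0 * np.log10(edc / edc[0])     # normalise to 0 dB at t = 0
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= db_start) & (edc_db >= db_end)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # decay rate in dB/s
    return -60.0 / slope

# Synthetic "room impulse response": white noise with an exponential
# envelope chosen so that energy decays 60 dB in true_t60 seconds.
rng = np.random.default_rng(0)
fs, true_t60 = 16000, 0.5
t = np.arange(int(fs * true_t60 * 1.5)) / fs
ir = rng.standard_normal(len(t)) * np.exp(-6.91 * t / true_t60)

print(round(estimate_rt60(ir, fs), 3))  # close to 0.5
```

The contrast with ReverbNet is the point: this procedure needs an excitation of the room and a measured impulse response, whereas the proposed system estimates such parameters blindly from reverberant speech alone.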

Permalink: https://www.aes.org/e-lib/browse.cfm?elib=21706
