AES E-Library

Extraction of Speech Transmission Index from Speech Signals Using Artificial Neural Networks

This paper presents a novel method for extracting the Speech Transmission Index (STI) from reverberated speech utterances using an artificial neural network. Convolutions of anechoic speech signals with simulated impulse responses of rooms of various kinds are used to train the network. A time-to-frequency-domain transformation algorithm is proposed as the pre-processor, and a multi-layer feed-forward neural network trained by back-propagation is adopted. Once trained, the neural network can accurately estimate the Speech Transmission Index from speech signals received by a microphone in a room. Because this approach utilises a naturalistic sound source, speech, it has the potential to facilitate measurement in occupied rooms.
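The pipeline the abstract describes (reverberate anechoic speech with a room impulse response, transform to the frequency domain, regress STI with a back-propagation-trained feed-forward network) can be sketched as below. This is a minimal illustration, not the paper's actual design: the band-energy features, network sizes, the exponentially decaying stand-in impulse responses, and the monotone decay-to-target mapping are all assumptions for demonstration (the real STI is defined by IEC 60268-16, not by this toy formula).

```python
import numpy as np

rng = np.random.default_rng(0)

def reverberate(speech, rir):
    """Training inputs: convolution of anechoic speech with a room impulse response."""
    return np.convolve(speech, rir)[: len(speech)]

def spectral_features(signal, n_bands=16):
    """Time-to-frequency pre-processing: log power in coarse frequency bands."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(power, n_bands)
    return np.log10(np.array([b.mean() for b in bands]) + 1e-12)

class FeedForwardSTI:
    """One-hidden-layer feed-forward network trained by back-propagation,
    regressing a value in [0, 1] as an STI estimate."""

    def __init__(self, n_in, n_hidden=8, lr=0.05):
        self.W1 = rng.normal(0.0, 0.3, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.3, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)
        self.y = 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))  # sigmoid keeps output in (0, 1)
        return self.y

    def backward(self, x, target):
        err = self.y - target
        d_out = err * self.y * (1.0 - self.y)       # sigmoid derivative
        dh = d_out * self.W2 * (1.0 - self.h ** 2)  # tanh derivative
        self.W2 -= self.lr * d_out * self.h
        self.b2 -= self.lr * d_out
        self.W1 -= self.lr * np.outer(x, dh)
        self.b1 -= self.lr * dh
        return err ** 2

# Toy training set: exponentially decaying noise stands in for a simulated RIR,
# and a hypothetical monotone decay-to-target mapping stands in for the true STI.
def make_example(decay):
    speech = rng.normal(size=2048)  # stand-in for an anechoic speech utterance
    rir = np.exp(-decay * np.arange(256)) * rng.normal(size=256)
    target_sti = decay / (decay + 0.05)  # illustrative target, NOT the STI formula
    return spectral_features(reverberate(speech, rir)), target_sti

dataset = [make_example(d) for d in (0.01, 0.03, 0.1, 0.3) for _ in range(8)]
net = FeedForwardSTI(n_in=16)
epoch_losses = []
for epoch in range(200):
    total = 0.0
    for x, t in dataset:
        net.forward(x)
        total += net.backward(x, t)
    epoch_losses.append(total / len(dataset))
```

After training, `net.forward(spectral_features(...))` returns a scalar in (0, 1) as the STI estimate for a received speech signal; the sigmoid output unit is one simple way to respect the index's bounded range.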

AES Convention:
Paper Number:
Publication Date:


E-Library Location:
