
Combining Visual and Acoustic Modalities to Ease Speech Recognition by Hearing Impaired People


This paper presents a system that facilitates speech training for hearing-impaired people. The system combines visual and acoustic speech data acquisition and analysis modules. The Active Shape Model method extracts visual speech features from the shape and movement of the lips, while acoustic feature extraction is based on mel-cepstral analysis. Artificial Neural Networks serve as the classifier; the extracted feature vectors combine both modalities of human speech. Additional experiments with degraded acoustic and/or visual information are carried out to test the system's robustness against various distortions affecting the signals.
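The abstract describes feature-level fusion: a mel-cepstral vector from the audio and a lip-shape vector from an Active Shape Model fit are concatenated and fed to a neural-network classifier. A minimal sketch of that pipeline, with hypothetical dimensions (12 cepstral coefficients, 10 lip-shape parameters, 5 output classes) and random, untrained weights standing in for the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper: 12 mel-cepstral
# coefficients per frame and 10 lip-shape parameters from an
# Active Shape Model fit, classified into 5 speech classes.
N_ACOUSTIC, N_VISUAL, N_CLASSES = 12, 10, 5

def fuse(acoustic_vec, visual_vec):
    """Feature-level fusion: concatenate both modality vectors."""
    return np.concatenate([acoustic_vec, visual_vec])

# One-hidden-layer network standing in for the paper's ANN classifier;
# the weights here are random and untrained, for illustration only.
W1 = rng.standard_normal((N_ACOUSTIC + N_VISUAL, 16))
W2 = rng.standard_normal((16, N_CLASSES))

def classify(features):
    hidden = np.tanh(features @ W1)      # hidden layer activation
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()               # class probabilities

# Dummy feature vectors in place of real MFCC / lip-landmark output.
x = fuse(rng.standard_normal(N_ACOUSTIC), rng.standard_normal(N_VISUAL))
probs = classify(x)
print(probs.shape, probs.sum())
```

Concatenation (early fusion) is only one of several fusion strategies; the robustness experiments in the paper, where one modality is degraded, are exactly where the choice of fusion scheme matters most.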

AES Convention: Paper Number:
Publication Date:

