The Transformer network has two drawbacks in Automatic Speech Recognition (ASR) tasks. First, it mainly attends to global features and neglects other useful features, such as local features. Second, it is not robust to noisy audio signals. To improve model performance in ASR tasks, the main concerns are extracting useful information and removing noise. First, an information extraction (IE) module is proposed to extract local context information from the integration of previous layers, which contain both low-level and high-level information. Second, a noisy feature pruning (NFP) module is proposed to mitigate the negative effect of noisy audio. Finally, a network called EPT-Net is proposed by integrating the IE module, the NFP module, and the Transformer network. Empirical evaluations are conducted on two widely used Chinese Mandarin datasets, Aishell-1 and HKUST. Experimental results validate the effectiveness of EPT-Net, which achieves character error rates (CER) of 5.3%/5.6% on the Aishell-1 dev/test sets and 21.9% on the HKUST dev set.
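The abstract describes the IE module as integrating previous layers' outputs (carrying both low-level and high-level information) and then extracting local context from that integration. The following is a minimal numpy sketch of one plausible realization, assuming a softmax-weighted sum over layer outputs followed by a 1-D convolution along the time axis; the function name, shapes, and both operations are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def ie_module_sketch(layer_outputs, mix_logits, kernel):
    """Hypothetical sketch of an IE-style module.

    layer_outputs: list of L arrays, each of shape (T, d) -- outputs of
                   previous layers (low-level and high-level features).
    mix_logits:    array of shape (L,) -- learnable layer-mixing scores.
    kernel:        1-D array of odd length -- local-context filter taps.
    """
    # (1) Integrate previous layers with softmax-normalized weights.
    w = np.exp(mix_logits - mix_logits.max())
    w = w / w.sum()
    integrated = sum(wi * h for wi, h in zip(w, layer_outputs))  # (T, d)

    # (2) Extract local context: depthwise 1-D conv over time,
    #     'same' padding so the output keeps shape (T, d).
    k = len(kernel)
    pad = k // 2
    padded = np.pad(integrated, ((pad, pad), (0, 0)))
    out = np.zeros_like(integrated)
    for t in range(integrated.shape[0]):
        out[t] = sum(kernel[i] * padded[t + i] for i in range(k))
    return out
```

With equal mixing logits and an identity kernel `[0, 1, 0]`, the sketch reduces to averaging the layer outputs, which makes the two stages easy to check in isolation.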