AES E-Library

Dual-Residual Transformer Network for Speech Recognition

The Transformer, an attention-based encoder-decoder network, has recently become the prevailing model for automatic speech recognition because of its high recognition accuracy. However, its convergence speed is suboptimal. To address this problem, a structure called the Dual-Residual Transformer Network (DRTNet), which converges quickly, is proposed. In DRTNet, inspired by the structure proposed in ResNet, a direct path is added in each encoder and decoder layer to propagate features. This architecture also fuses features, which tends to improve model performance. Specifically, the input of the current layer is the integration of the input and output of the previous layer. The proposed DRTNet was evaluated empirically on two public datasets, AISHELL-1 and HKUST. Experimental results on both datasets show that DRTNet achieves faster convergence and better performance.
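The abstract's layer rule can be sketched briefly. In the following minimal sketch, a toy `tanh` linear map stands in for the real Transformer sublayers (self-attention plus feed-forward), and element-wise summation is assumed as the "integration" of a layer's input and output; the function names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def sublayer(x, w):
    # Hypothetical stand-in for a Transformer layer's
    # attention + feed-forward computation.
    return np.tanh(x @ w)

def dual_residual_stack(x, weights):
    """Sketch of the dual-residual idea: the input to layer i is the
    integration (here: sum) of the input and output of layer i-1,
    giving every layer a direct path to earlier features."""
    layer_in = x
    for w in weights:
        layer_out = sublayer(layer_in, w)
        # Direct path: propagate the previous input alongside the output.
        layer_in = layer_in + layer_out
    return layer_in

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))                       # (batch, feature)
weights = [0.1 * rng.standard_normal((4, 4)) for _ in range(3)]
y = dual_residual_stack(x, weights)
```

Because each layer's input already contains the untransformed features from the direct path, gradients reach early layers without passing through every sublayer, which is the mechanism the abstract credits for the faster convergence.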

JAES Volume 70 Issue 10 pp. 871-881; October 2022
Permalink: https://www.aes.org/e-lib/browse.cfm?elib=22013


