Machine Learning Applied to Aspirated and Non-Aspirated Allophone Classification—An Approach Based on Audio "Fingerprinting"
Citation & Abstract
M. Piotrowska, G. Korvel, A. Kurowski, B. Kostek, and A. Czyzewski, "Machine Learning Applied to Aspirated and Non-Aspirated Allophone Classification–An Approach Based on Audio “Fingerprinting”," Paper 10070, (2018 October).
Abstract: The purpose of this study is to involve both Convolutional Neural Networks and a typical learning algorithm in the allophone classification process. A list of words including aspirated and non-aspirated allophones pronounced by native and non-native English speakers is recorded, then edited and analyzed. Allophones extracted from the English speakers’ recordings are presented as two-dimensional spectrogram images and used as input to train the Convolutional Neural Networks. Various settings of the spectral representation are analyzed to determine an adequate option for allophone classification. Testing is then performed on non-native speakers’ utterances. The same approach is repeated employing a learning algorithm, but based on feature vectors. The achieved classification results are promising, as high accuracy is observed.
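The front end the abstract describes (turning a short speech segment into a two-dimensional spectrogram image for a CNN) can be sketched in a few lines of NumPy. The frame length, hop size, and sampling rate below are illustrative assumptions, not the settings used in the paper, and the synthetic tone merely stands in for an extracted allophone.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram: Hann-windowed frames transformed
    with a real FFT; result has shape (freq_bins, n_frames)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

# A 0.1 s synthetic "allophone" at 16 kHz: a 1 kHz tone.
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
# spec is the 2-D image a CNN would consume; the paper's point is that
# varying n_fft/hop changes this representation and thus CNN accuracy.
```

Varying `n_fft` and `hop` trades frequency resolution against time resolution, which is presumably what the abstract means by analyzing "various settings of the spectral representation."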
@article{piotrowska2018machine,
  author   = {Piotrowska, Magdalena and Korvel, Grazina and Kurowski, Adam and Kostek, Bozena and Czyzewski, Andrzej},
  journal  = {Journal of the Audio Engineering Society},
  title    = {Machine Learning Applied to Aspirated and Non-Aspirated Allophone Classification--An Approach Based on Audio ``Fingerprinting''},
  year     = {2018},
  month    = {October},
  abstract = {The purpose of this study is to involve both Convolutional Neural Networks and a typical learning algorithm in the allophone classification process. A list of words including aspirated and non-aspirated allophones pronounced by native and non-native English speakers is recorded, then edited and analyzed. Allophones extracted from the English speakers' recordings are presented as two-dimensional spectrogram images and used as input to train the Convolutional Neural Networks. Various settings of the spectral representation are analyzed to determine an adequate option for allophone classification. Testing is then performed on non-native speakers' utterances. The same approach is repeated employing a learning algorithm, but based on feature vectors. The achieved classification results are promising, as high accuracy is observed.},
}
TY  - CPAPER
TI  - Machine Learning Applied to Aspirated and Non-Aspirated Allophone Classification–An Approach Based on Audio “Fingerprinting”
AU  - Piotrowska, Magdalena
AU  - Korvel, Grazina
AU  - Kurowski, Adam
AU  - Kostek, Bozena
AU  - Czyzewski, Andrzej
PY  - 2018
JO  - Journal of the Audio Engineering Society
Y1  - October 2018
AB  - The purpose of this study is to involve both Convolutional Neural Networks and a typical learning algorithm in the allophone classification process. A list of words including aspirated and non-aspirated allophones pronounced by native and non-native English speakers is recorded, then edited and analyzed. Allophones extracted from the English speakers’ recordings are presented as two-dimensional spectrogram images and used as input to train the Convolutional Neural Networks. Various settings of the spectral representation are analyzed to determine an adequate option for allophone classification. Testing is then performed on non-native speakers’ utterances. The same approach is repeated employing a learning algorithm, but based on feature vectors. The achieved classification results are promising, as high accuracy is observed.
ER  -
Open Access
Authors:
Piotrowska, Magdalena; Korvel, Grazina; Kurowski, Adam; Kostek, Bozena; Czyzewski, Andrzej
Affiliations:
Vilnius University, Vilnius, Lithuania; Gdansk University of Technology, Gdansk, Poland (see document for exact affiliation information)
AES Convention:
145 (October 2018)
Paper Number:
10070
Publication Date:
October 7, 2018
Subject:
Acoustics and Signal Processing
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=19796