Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise
Citation & Abstract
M. Dong, D. Yan, and Y. Gong, "Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise," J. Audio Eng. Soc., vol. 71, no. 1/2, pp. 34-44 (2023 January). https://doi.org/10.17743/jaes.2022.0060
Abstract: An automatic speech recognition (ASR) system based on a deep neural network is vulnerable to attack by adversarial examples, and the consequences are especially serious if a command-dependent ASR fails. A defense method against adversarial examples is proposed to improve the robustness and security of the ASR system: an algorithm for devastating and detecting adversarial examples that can attack current advanced ASR systems. Advanced text-dependent and command-dependent ASR systems are chosen as targets, with adversarial examples generated by an optimization-based attack on the text-dependent ASR and a genetic-algorithm-based attack on the command-dependent ASR. The method is based on an input transformation of adversarial examples: random noise of different intensities and kinds is added to devastate the perturbation previously added to normal examples. Experimental results show that the method performs well. For the devastation of examples, the original speech similarity after adding noise can reach 99.68% while the similarity of adversarial examples can reach zero, and the detection rate of adversarial examples can reach 94%.
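For illustration only, a minimal sketch of the input-transformation idea the abstract describes: add random noise of a chosen kind and intensity to the input, then flag an example as adversarial if its transcription changes too much under that noise. The transcribe callable, the SNR values, the character-level similarity metric, and the 0.5 threshold are all assumptions for the sketch, not details taken from the paper.

# Minimal sketch (assumptions, not the authors' implementation) of the
# noise-based input transformation and detection described in the abstract.
import difflib
import numpy as np

def add_random_noise(waveform: np.ndarray, snr_db: float = 30.0,
                     kind: str = "gaussian") -> np.ndarray:
    """Add noise of a chosen kind at a given signal-to-noise ratio (dB)."""
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    if kind == "gaussian":
        noise = np.random.normal(0.0, np.sqrt(noise_power), waveform.shape)
    elif kind == "uniform":
        # Uniform on [-a, a] has power a^2 / 3, so a = sqrt(3 * noise_power).
        a = np.sqrt(3.0 * noise_power)
        noise = np.random.uniform(-a, a, waveform.shape)
    else:
        raise ValueError(f"unknown noise kind: {kind}")
    return waveform + noise

def transcription_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; a stand-in for the paper's metric."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def is_adversarial(waveform, transcribe, snr_db: float = 30.0,
                   threshold: float = 0.5) -> bool:
    """Benign speech should transcribe almost identically after mild noise,
    whereas a fragile adversarial perturbation is devastated by it.
    `transcribe` is any ASR callable mapping a waveform to text (hypothetical)."""
    clean_text = transcribe(waveform)
    noisy_text = transcribe(add_random_noise(waveform, snr_db, kind="gaussian"))
    return transcription_similarity(clean_text, noisy_text) < threshold

The design choice mirrors the abstract's observation: mild noise barely changes a benign transcription (similarity near 1) but can drive an adversarial example's similarity toward zero, so a simple threshold separates the two.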
@article{dong2023adversarial,
  author={Dong, Mingyu and Yan, Diqun and Gong, Yongkang},
  journal={Journal of the Audio Engineering Society},
  title={Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise},
  year={2023},
  volume={71},
  number={1/2},
  pages={34-44},
  doi={10.17743/jaes.2022.0060},
  month={January},
}
TY - JOUR
TI - Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise
SP - 34
EP - 44
AU - Dong, Mingyu
AU - Yan, Diqun
AU - Gong, Yongkang
PY - 2023
JO - Journal of the Audio Engineering Society
IS - 1/2
VL - 71
Y1 - 2023/01
DO - 10.17743/jaes.2022.0060
ER -
Authors:
Dong, Mingyu; Yan, Diqun; Gong, Yongkang
Affiliation:
College of Information Science and Engineering, Ningbo University, Ningbo, Zhejiang, China
JAES Volume 71, Issue 1/2, pp. 34-44; January 2023
Publication Date:
January 16, 2023
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=22029