A Practical Black-Box Attack Against Autonomous Speech Recognition Model

Times Cited: 0
Authors
Fan, Wenshu [1]
Li, Hongwei [1,2]
Jiang, Wenbo [1]
Xu, Guowen [1]
Lu, Rongxing [3]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu, Peoples R China
[2] Cyberspace Secur Res Ctr, Peng Cheng Lab, Shenzhen 518000, Peoples R China
[3] Univ New Brunswick, Fac Comp Sci, Fredericton, NB, Canada
Source
2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM) | 2020
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Machine Learning; Automatic Speech Recognition; Differential Evolution; Black-Box Attack;
DOI
10.1109/GLOBECOM42002.2020.9348184
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the wide application of machine learning (ML) technology, automatic speech recognition (ASR) has made great progress in recent years. Despite this progress, ML-based ASR is vulnerable to various evasion attacks, which threaten the security of applications built upon it. Most studies to date focus on white-box attacks against ASR; little attention has been paid to black-box attacks in the audio domain, where attackers can only query the target model for output labels rather than probability vectors. In this paper, we propose an evasion attack against ASR under this more realistic setting. Specifically, we first train a substitute model using data augmentation, which yields enough training samples while keeping the number of queries to the target model small. Then, based on the substitute model, we apply the Differential Evolution (DE) algorithm to craft adversarial examples and mount black-box attacks against ASR models trained on the Speech Commands dataset. Extensive experiments show that our approach achieves untargeted attacks with a success rate of over 70% while largely preserving the authenticity of the original audio.
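The two-step pipeline described in the abstract (augmentation-based substitute training, then a DE search for perturbations) can be sketched as follows. This is a minimal illustration, not the authors' implementation: substitute_model is a hypothetical callable returning class probabilities, SciPy's differential_evolution stands in for the paper's DE variant, and all parameter values (number of perturbed samples, perturbation bound, population settings) are assumptions.

import numpy as np
from scipy.optimize import differential_evolution

def augment(waveform, rng):
    # Expand a small set of labeled query results into a larger substitute
    # training set via a random time shift plus Gaussian noise
    # (illustrative augmentations; the paper's exact set may differ).
    shift = int(rng.integers(-1600, 1600))  # up to 0.1 s at 16 kHz
    return np.roll(waveform, shift) + rng.normal(0.0, 0.005, waveform.shape)

def untargeted_de_attack(substitute_model, audio, true_label,
                         n_points=50, max_delta=0.05):
    # Perturb a few randomly chosen sample positions so the substitute
    # model's confidence in the true label drops (untargeted evasion).
    rng = np.random.default_rng(0)
    positions = rng.choice(len(audio), size=n_points, replace=False)

    def loss(delta):
        # Lower probability on the true label is better for an
        # untargeted attack.
        perturbed = audio.copy()
        perturbed[positions] = np.clip(perturbed[positions] + delta, -1.0, 1.0)
        return substitute_model(perturbed)[true_label]

    # Small per-sample bounds keep the adversarial audio close to the original.
    bounds = [(-max_delta, max_delta)] * n_points
    result = differential_evolution(loss, bounds, maxiter=30,
                                    popsize=15, seed=0)

    adversarial = audio.copy()
    adversarial[positions] = np.clip(adversarial[positions] + result.x, -1.0, 1.0)
    return adversarial

Because DE needs only loss evaluations, not gradients, examples crafted against the substitute model can then be replayed against the label-only target model to test transferability.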
Pages: 6