Your Voice is Not Yours? Black-Box Adversarial Attacks Against Speaker Recognition Systems

Cited by: 0
Authors
Ye, Jianbin [1 ]
Lin, Fuqiang [1 ]
Liu, Xiaoyuan [1 ]
Liu, Bo [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Comp Sci & Technol, Changsha, Peoples R China
Source
2022 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING, ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM | 2022
Keywords
Deep Learning; Speaker Recognition; Adversarial Example; Black-box Attack;
DOI
10.1109/ISPA-BDCloud-SocialCom-SustainCom57177.2022.00094
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Speaker recognition (SR) systems play an essential role in many applications, such as speech-controlled access systems and voice verification, where robustness is crucial for daily use. The vulnerability of SR systems has therefore attracted considerable research interest. Notably, recent studies show that SR systems are vulnerable to adversarial attacks, which craft adversarial examples by adding a well-designed, inconspicuous perturbation to the original audio so that the target model makes false predictions. However, most existing works conduct attacks in the white-box setting, which has limited practicality because model parameters and architectures are usually unavailable in real-world scenarios. To this end, we propose a black-box framework that requires no details of the target model. We use a gradient estimation procedure based on the natural evolution strategy to generate adversarial examples; the estimation needs only the confidence scores and decisions produced by the SR system. We also employ a genetic algorithm to guide the direction of example generation, which accelerates convergence. Experiments demonstrate that our approach can manipulate state-of-the-art SR systems with a high attack success rate of 97.5% and small distortions. Extensive investigations on the benchmark datasets VoxCeleb1, VoxCeleb2, and TIMIT further verify the effectiveness and stealthiness of our attack method.
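The paper's implementation is not included in this record. As a rough illustration of the score-based technique the abstract names, the sketch below estimates a black-box gradient with natural evolution strategies (antithetic Gaussian sampling) and takes one perturbation step under an L-infinity budget. All function names and hyperparameters here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def nes_gradient(score_fn, x, sigma=0.001, n_samples=50):
    """Estimate the gradient of a black-box score function at x using
    natural evolution strategies with antithetic Gaussian sampling.
    Only score queries are needed -- no model internals."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        grad += score_fn(x + sigma * u) * u   # forward query
        grad -= score_fn(x - sigma * u) * u   # antithetic query
    return grad / (2 * sigma * n_samples)

def attack_step(score_fn, x, lr=0.01, eps=0.05, x_orig=None):
    """One iteration: ascend the estimated gradient of the target
    score, then project the perturbation back into an L-infinity
    ball of radius eps around the original audio."""
    if x_orig is None:
        x_orig = x
    g = nes_gradient(score_fn, x)
    x_new = x + lr * np.sign(g)
    return x_orig + np.clip(x_new - x_orig, -eps, eps)
```

In an attack loop, `score_fn` would wrap queries to the SR system (e.g. the confidence score of the adversarial target speaker), and `attack_step` would be applied repeatedly until the system's decision flips or the query budget is exhausted; the genetic-algorithm guidance described in the abstract is not sketched here.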
Pages: 692-699
Number of pages: 8