Your Voice is Not Yours? Black-Box Adversarial Attacks Against Speaker Recognition Systems

Cited by: 0
Authors
Ye, Jianbin [1 ]
Lin, Fuqiang [1 ]
Liu, Xiaoyuan [1 ]
Liu, Bo [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Comp Sci & Technol, Changsha, Peoples R China
Source
2022 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING, ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM | 2022
Keywords
Deep Learning; Speaker Recognition; Adversarial Example; Black-box Attack;
DOI
10.1109/ISPA-BDCloud-SocialCom-SustainCom57177.2022.00094
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Speaker recognition (SR) systems play an essential part in many applications, such as speech-controlled access systems and voice verification, where robustness is crucial for daily use. The vulnerability of SR systems has therefore become a topic of intense study. Notably, recent work shows that SR systems are vulnerable to adversarial attacks, which generate adversarial examples by adding a well-crafted, inconspicuous perturbation to the original audio so that the target model makes false predictions. However, most existing work conducts attacks in the white-box setting, which has limited practicality because model parameters and architectures are usually unavailable in real-world scenarios. To this end, we propose a black-box framework that requires no details of the target model. We leverage a gradient estimation procedure based on the natural evolution strategy (NES) to generate adversarial examples; the estimation needs only the confidence scores and decisions produced by the SR system. We also employ a genetic algorithm to guide the direction of example generation, which accelerates convergence. Experiments demonstrate that our approach can manipulate state-of-the-art SR systems with a high attack success rate of 97.5% and small distortions. Extensive evaluations on the benchmark datasets VoxCeleb1, VoxCeleb2, and TIMIT further verify the effectiveness and stealthiness of our attack method.
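The NES-based gradient estimation described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction of the general technique, not the authors' implementation: the function names, hyperparameters, and the L-infinity projection step are assumptions, and `loss_fn` stands in for whatever attacker objective is computed from the SR system's confidence scores.

```python
import numpy as np

def nes_gradient(loss_fn, x, sigma=0.001, n_samples=50):
    """Estimate the gradient of loss_fn at x via a natural evolution
    strategy (NES), using only black-box loss queries (e.g. a score
    derived from the confidences returned by an SR system)."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)  # Gaussian search direction
        # Antithetic sampling: query the loss at x + sigma*u and x - sigma*u
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)

def attack_step(loss_fn, x, lr=0.01, eps=0.05):
    """One gradient-ascent step on the attacker's loss, projected back
    onto an L-infinity ball of radius eps to keep the distortion small."""
    x_adv = x + lr * np.sign(nes_gradient(loss_fn, x))
    return np.clip(x_adv, x - eps, x + eps)
```

Each NES iteration costs two model queries per sampled direction, which is why such attacks need only the scores and decisions the abstract mentions, never the model's internals.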
Pages: 692-699
Page count: 8