Your Voice is Not Yours? Black-Box Adversarial Attacks Against Speaker Recognition Systems

Cited by: 0
Authors
Ye, Jianbin [1 ]
Lin, Fuqiang [1 ]
Liu, Xiaoyuan [1 ]
Liu, Bo [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Comp Sci & Technol, Changsha, Peoples R China
Source
2022 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM) | 2022
Keywords
Deep Learning; Speaker Recognition; Adversarial Example; Black-box Attack;
DOI
10.1109/ISPA-BDCloud-SocialCom-SustainCom57177.2022.00094
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Speaker recognition (SR) systems play an essential part in many applications, such as speech-controlled access systems and voice verification, where robustness is crucial for daily use. The vulnerability of SR systems has therefore become a topic of intense study. Notably, recent studies show that SR systems are vulnerable to adversarial attacks, which generate adversarial examples by adding a well-crafted, inconspicuous perturbation to the original audio so that the target model makes false predictions. However, most existing work conducts attacks in the white-box setting, which is of limited practical use since model parameters and architectures are usually unavailable in real-world scenarios. To this end, we propose a black-box framework that requires no details of the target model. We leverage a gradient estimation procedure based on the natural evolution strategy (NES) to generate adversarial examples; the estimation needs only the confidence scores and decisions produced by SR systems. We also explore a genetic algorithm to guide the direction of example generation, which accelerates convergence. Experiments demonstrate that our approach can manipulate state-of-the-art SR systems with a high attack success rate of 97.5% and small distortions. Extensive investigations on the benchmark datasets VoxCeleb1, VoxCeleb2, and TIMIT further verify the effectiveness and stealthiness of our attack.
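The NES-based gradient estimation the abstract refers to can be sketched as follows. This is a minimal, illustrative implementation, not the authors' code: the score function, perturbation budget, and toy objective are all hypothetical stand-ins for a real SR system's confidence output.

```python
import numpy as np

def nes_gradient(score_fn, x, sigma=0.001, n_samples=50):
    """Estimate the gradient of score_fn at x with natural evolution
    strategies (NES): average score-weighted antithetic Gaussian probes,
    using only black-box score queries (no model internals)."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)          # Gaussian search direction
        grad += score_fn(x + sigma * u) * u    # forward probe
        grad -= score_fn(x - sigma * u) * u    # antithetic probe
    return grad / (2 * n_samples * sigma)

# Toy usage: the "confidence score" is a smooth function of the waveform;
# NES recovers an ascent direction from score queries alone, after which
# one signed gradient step increases the score.
np.random.seed(0)
x = np.zeros(16)                                # stand-in for an audio frame
target = np.ones(16)
score = lambda a: -np.sum((a - target) ** 2)    # higher = closer to target
g = nes_gradient(score, x)
x_adv = x + 0.1 * np.sign(g)                    # one FGSM-style step
assert score(x_adv) > score(x)
```

The antithetic (paired +u/-u) probes cancel even-order terms of the objective, which lowers the variance of the estimate; a real attack would iterate such steps under a perturbation-size constraint to keep the adversarial audio inconspicuous.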
Pages: 692-699
Page count: 8