Multi-layer Feature Augmentation Based Transferable Adversarial Examples Generation for Speaker Recognition

Cited by: 0
Authors
Li, Zhuhai [1 ]
Zhang, Jie [1 ]
Guo, Wu [1 ]
Affiliations
[1] Univ Sci & Technol China, NERC SLIP, Hefei 230027, Peoples R China
Source
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024 | 2024, Vol. 14865
Keywords
Adversarial Attack; Transferability; Speaker Recognition;
DOI
10.1007/978-981-97-5591-2_32
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Adversarial examples that remain almost imperceptible to humans can mislead practical speaker recognition systems. However, most existing adversarial examples generated with a substitute model transfer poorly to unseen victim models. To tackle this problem, we propose a multi-layer feature augmentation method to improve the transferability of adversarial examples. Specifically, we apply data augmentation to the intermediate-layer feature maps of the substitute model to create diverse pseudo victim models. By attacking the ensemble of the substitute model and the corresponding augmented models, the proposed method keeps the adversarial examples from overfitting the substitute model, yielding more transferable adversarial examples. Experimental results on the VoxCeleb dataset verify the effectiveness of the proposed approach on both speaker identification and speaker verification tasks.
Pages: 373-385
Number of pages: 13
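The abstract above outlines the attack only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea, not the authors' implementation: Gaussian jitter injected into intermediate feature maps stands in for the paper's feature augmentation, and a simple iterative sign-gradient attack maximizes the loss summed over the clean and augmented forward passes. All names and hyperparameters (TinySpeakerNet, ensemble_attack, noise_levels, eps) are illustrative assumptions.

```python
# Hypothetical sketch of multi-layer feature augmentation for transferable
# adversarial examples; stand-in model and hyperparameters, not the paper's code.
import torch
import torch.nn as nn


class TinySpeakerNet(nn.Module):
    """Toy substitute model: two conv blocks followed by a speaker classifier."""

    def __init__(self, n_speakers: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv1d(16, 32, 5, padding=2), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(32, n_speakers)

    def forward(self, x, feat_noise: float = 0.0):
        # feat_noise > 0 emulates a "pseudo victim model" by jittering the
        # intermediate feature maps (one simple form of feature augmentation).
        h = self.block1(x)
        if feat_noise > 0:
            h = h + feat_noise * torch.randn_like(h)
        h = self.block2(h)
        if feat_noise > 0:
            h = h + feat_noise * torch.randn_like(h)
        return self.fc(self.pool(h).squeeze(-1))


def ensemble_attack(model, wav, label, eps=0.002, steps=10,
                    noise_levels=(0.0, 0.05, 0.1)):
    """Iterative sign-gradient attack on the ensemble of clean and augmented passes."""
    delta = torch.zeros_like(wav, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        # Sum the loss over the clean model and its feature-augmented variants.
        loss = sum(loss_fn(model(wav + delta, s), label) for s in noise_levels)
        loss.backward()
        with torch.no_grad():
            delta += (eps / steps) * delta.grad.sign()  # maximize loss (untargeted)
            delta.clamp_(-eps, eps)                     # keep the perturbation small
        delta.grad.zero_()
    return (wav + delta).detach()


if __name__ == "__main__":
    model = TinySpeakerNet()
    wav = torch.randn(1, 1, 16000)   # one second of fake 16 kHz audio
    label = torch.tensor([3])        # true speaker index
    adv = ensemble_attack(model, wav, label)
    print((adv - wav).abs().max())   # perturbation stays within eps
```

Because the gradient is accumulated over several randomized views of the same substitute model, the resulting perturbation is less tied to that model's exact internal features, which is the intuition behind the improved transferability described in the abstract.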