UNIVERSAL ADVERSARIAL ATTACK AGAINST SPEAKER RECOGNITION MODELS

Cited: 0
Authors
Hanina, Shoham [1 ]
Zolfi, Alon [1 ]
Elovici, Yuval [1 ]
Shabtai, Asaf [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Negev, Israel
Source
2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024 | 2024
Keywords
Speaker Recognition; Adversarial Attack;
DOI
10.1109/ICASSP48485.2024.10447073
Abstract
In recent years, deep learning-based speaker recognition (SR) models have received a large amount of attention from the machine learning (ML) community. Their increasing popularity derives in large part from their effectiveness in identifying speakers in many security-sensitive applications. Researchers have attempted to challenge the robustness of SR models, and they have revealed the models' vulnerability to adversarial ML attacks. However, prior studies mainly proposed tailor-made perturbations that are only effective for the speakers they were trained on (i.e., a closed set). In this paper, we propose the Anonymous Speakers attack, a universal adversarial perturbation that fools SR models on all speakers in an open-set environment, i.e., including speakers that were not part of the training phase of the attack. Using a custom optimization process, we craft a single perturbation that can be applied to the original recording of any speaker and results in misclassification by the SR model. We examined the attack's effectiveness on various state-of-the-art SR models with a wide range of speaker identities. The results of our experiments show that our attack substantially reduces the similarity between the perturbed embedding and the speaker's original embedding representation while maintaining a high signal-to-noise ratio.
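The optimization the abstract describes can be illustrated with a minimal sketch: learn one perturbation, shared across a batch of training speakers, that pushes each perturbed recording's embedding away from its clean embedding, while an L-infinity budget keeps the perturbation small (and hence the SNR high). Everything below is hypothetical scaffolding, not the paper's actual method: the "embedding model" is a random linear map, the data is random noise, and the loss/projection choices are generic PGD-style assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "embedding model" W and a batch of
# training speakers' recordings X. A real attack would use an SR network.
dim_audio, dim_emb, n_speakers = 64, 16, 32
W = rng.normal(size=(dim_emb, dim_audio))
X = rng.normal(size=(n_speakers, dim_audio))

def embed(x):
    return x @ W.T  # maps audio -> embedding space

def cos_sim(u, v):
    return (u * v).sum(-1) / (np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))

eps, lr, steps = 0.1, 0.02, 200   # small L-inf budget preserves SNR
delta = np.zeros(dim_audio)       # the single universal perturbation
V = embed(X)                      # fixed clean embeddings to move away from

for _ in range(steps):
    U = embed(X + delta)
    # Analytic gradient of mean cosine similarity w.r.t. delta
    nu = np.linalg.norm(U, axis=1, keepdims=True)
    nv = np.linalg.norm(V, axis=1, keepdims=True)
    s = (U * V).sum(1, keepdims=True)
    dU = V / (nu * nv) - s * U / (nu**3 * nv)  # d cos / d U
    grad = (dU @ W).mean(0)                    # chain rule through embed()
    # Sign-gradient descent on similarity, projected back into the budget
    delta = np.clip(delta - lr * np.sign(grad), -eps, eps)

before = cos_sim(embed(X), V).mean()
after = cos_sim(embed(X + delta), V).mean()
```

After optimization, the same `delta` added to any recording in the batch lowers its embedding similarity to the clean version; the open-set claim in the abstract is that this transfer also holds for speakers never seen during the optimization.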
Pages: 4860-4864
Page count: 5
Related Papers (50 total)
  • [21] Adversarial Separation Network for Speaker Recognition
    Zhang, Hanyi
    Wang, Longbiao
    Zhang, Yunchun
    Liu, Meng
    Lee, Kong Aik
    Wei, Jianguo
    INTERSPEECH 2020, 2020, : 951 - 955
  • [22] Neural adversarial learning for speaker recognition
    Chien, Jen-Tzung
    Peng, Kang-Ting
    COMPUTER SPEECH AND LANGUAGE, 2019, 58 : 422 - 440
  • [23] Timbre-Reserved Adversarial Attack in Speaker Identification
    Wang, Qing
    Yao, Jixun
    Zhang, Li
    Guo, Pengcheng
    Xie, Lei
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 3848 - 3858
  • [24] Adversarial Attack against Modeling Attack on PUFs
    Wang, Sying-Jyan
    Chen, Yu-Shen
    Li, Katherine Shu-Min
    PROCEEDINGS OF THE 2019 56TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2019,
  • [25] SiFDetectCracker: An Adversarial Attack Against Fake Voice Detection Based on Speaker-Irrelative Features
    Hai, Xuan
    Liu, Xin
    Tan, Yuan
    Zhou, Qingguo
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8552 - 8560
  • [26] Representation Learning to Classify and Detect Adversarial Attacks against Speaker and Speech Recognition Systems
    Villalba, Jesus
    Joshi, Sonal
    Zelasko, Piotr
    Dehak, Najim
    INTERSPEECH 2021, 2021, : 4304 - 4308
  • [27] An Empirical Study of Fully Black-Box and Universal Adversarial Attack for SAR Target Recognition
    Peng, Bowen
    Peng, Bo
    Yong, Shaowei
    Liu, Li
    REMOTE SENSING, 2022, 14 (16)
  • [28] Transferable adversarial distribution learning: Query-efficient adversarial attack against large language models
    Dong, Huoyuan
    Dong, Jialiang
    Wan, Shaohua
    Yuan, Shuai
    Guan, Zhitao
    COMPUTERS & SECURITY, 2023, 135
  • [29] Symmetric Saliency-Based Adversarial Attack to Speaker Identification
    Yao, Jiadi
    Chen, Xing
    Zhang, Xiao-Lei
    Zhang, Wei-Qiang
    Yang, Kunde
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1 - 5
  • [30] An Extensive Study on Adversarial Attack against Pre-trained Models of Code
    Du, Xiaohu
    Wen, Ming
    Wei, Zichao
    Wang, Shangwen
    Jin, Hai
    PROCEEDINGS OF THE 31ST ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2023, 2023, : 489 - 501