Boosting the transferability of adversarial attacks with global momentum initialization

Cited by: 3
Authors
Wang, Jiafeng [1 ]
Chen, Zhaoyu [2 ,3 ]
Jiang, Kaixun [2 ,3 ]
Yang, Dingkang [2 ,3 ]
Hong, Lingyi [1 ]
Guo, Pinxue [2 ,3 ]
Guo, Haijing [1 ]
Zhang, Wenqiang [1 ,2 ,3 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China
[2] Fudan Univ, Acad Engn & Technol, Shanghai Engn Res Ctr AI & Robot, Shanghai 200433, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Engn Res Ctr Robot, Minist Educ, Shanghai 200433, Peoples R China
Keywords
Adversarial examples; Black-box attacks; Adversarial transferability; Gradient optimization; Robustness;
DOI
10.1016/j.eswa.2024.124757
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to benign inputs. Adversarial examples also transfer across models, enabling practical black-box attacks. However, existing methods still fall short of the desired transfer attack performance. In this work, focusing on gradient optimization and consistency, we analyze the gradient elimination phenomenon as well as the local momentum optimum dilemma. To tackle these challenges, we introduce Global Momentum Initialization (GI), which supplies global momentum knowledge to mitigate gradient elimination. Specifically, we perform gradient pre-convergence before the attack and a global search during this stage. GI integrates seamlessly with existing transfer methods, improving the success rate of transfer attacks by an average of 6.4% under various advanced defense mechanisms compared with the state-of-the-art method. GI demonstrates strong transferability in both the image and video attack domains; in particular, when attacking advanced defense methods in the image domain, it achieves an average attack success rate of 95.4%. The code is available at https://github.com/Omenzychen/Global-MomentumInitialization.
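The two-stage idea described in the abstract (momentum pre-convergence with an enlarged global search step, then a standard momentum-iterative attack restarted from the clean input with the warmed-up momentum) can be sketched as follows. This is a minimal NumPy illustration, not the authors' released implementation; the `grad_fn` callback, the `search_factor` enlargement, and the L1 momentum normalization are assumptions based on common MI-FGSM practice.

```python
import numpy as np

def gi_mi_fgsm(x, grad_fn, eps=0.3, steps=10, pre_steps=5,
               mu=1.0, search_factor=2.0):
    """Momentum-iterative attack warm-started with a global momentum
    initialization stage (illustrative sketch).

    x          : clean input (NumPy array)
    grad_fn    : returns the loss gradient w.r.t. the input (hypothetical)
    eps        : L-infinity perturbation budget
    search_factor : step enlargement used only during pre-convergence
    """
    alpha = eps / steps              # per-step budget
    g = np.zeros_like(x)             # accumulated momentum

    # Stage 1: gradient pre-convergence with an enlarged (global) step,
    # accumulating momentum before the real attack begins.
    x_pre = x.copy()
    for _ in range(pre_steps):
        grad = grad_fn(x_pre)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_pre = np.clip(x_pre + search_factor * alpha * np.sign(g),
                        x - eps, x + eps)

    # Stage 2: standard MI-FGSM restarted from the clean input,
    # keeping the pre-converged momentum as initialization.
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

With a toy constant-gradient loss, the attack marches each coordinate to the budget boundary, showing that the perturbation stays within `eps` while the momentum from stage 1 carries over into stage 2.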
Pages: 10
Related papers
50 records in total
  • [31] A STUDY ON THE TRANSFERABILITY OF ADVERSARIAL ATTACKS IN SOUND EVENT CLASSIFICATION
    Subramanian, Vinod
    Pankajakshan, Arjun
    Benetos, Emmanouil
    Xu, Ning
    McDonald, SKoT
    Sandler, Mark
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 301 - 305
  • [32] Enhancing the Transferability of Targeted Attacks with Adversarial Perturbation Transform
    Deng, Zhengjie
    Xiao, Wen
    Li, Xiyan
    He, Shuqian
    Wang, Yizhen
    ELECTRONICS, 2023, 12 (18)
  • [33] Enhancing the Transferability of Adversarial Attacks through Variance Tuning
    Wang, Xiaosen
    He, Kun
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 1924 - 1933
  • [34] Studying the Transferability of Non-Targeted Adversarial Attacks
    Alvarez, Enrique
    Alvarez, Rafael
    Cazorla, Miguel
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [35] Enhancing the transferability of adversarial attacks with diversified input strategies
    Li, Z.
    Chen, Y.
    Yang, B.
    Li, C.
    Zhang, S.
    Li, W.
    Zhang, H.
    Journal of Intelligent and Fuzzy Systems, 2024, 46 (04): : 10359 - 10373
  • [36] GM-Attack: Improving the Transferability of Adversarial Attacks
    Hong, Jinbang
    Tang, Keke
    Gao, Chao
    Wang, Songxin
    Guo, Sensen
    Zhu, Peican
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2022, PT III, 2022, 13370 : 489 - 500
  • [37] On the Transferability of Adversarial Attacks against Neural Text Classifier
    Yuan, Liping
    Zheng, Xiaoqing
    Zhou, Yi
    Hsieh, Cho-Jui
    Chang, Kai-Wei
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1612 - 1625
  • [38] Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
    Demontis, Ambra
    Melis, Marco
    Pintor, Maura
    Jagielski, Matthew
    Biggio, Battista
    Oprea, Alina
    Nita-Rotaru, Cristina
    Roli, Fabio
    PROCEEDINGS OF THE 28TH USENIX SECURITY SYMPOSIUM, 2019, : 321 - 338
  • [39] Boosting Adversarial Transferability With Learnable Patch-Wise Masks
    Wei, Xingxing
    Zhao, Shiji
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 3778 - 3787
  • [40] Boosting the transferability of adversarial examples via stochastic serial attack
    Hao, Lingguang
    Hao, Kuangrong
    Wei, Bing
    Tang, Xue-song
    NEURAL NETWORKS, 2022, 150 : 58 - 67