Boosting the transferability of adversarial attacks with global momentum initialization

Cited by: 3
Authors
Wang, Jiafeng [1 ]
Chen, Zhaoyu [2 ,3 ]
Jiang, Kaixun [2 ,3 ]
Yang, Dingkang [2 ,3 ]
Hong, Lingyi [1 ]
Guo, Pinxue [2 ,3 ]
Guo, Haijing [1 ]
Zhang, Wenqiang [1 ,2 ,3 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China
[2] Fudan Univ, Acad Engn & Technol, Shanghai Engn Res Ctr AI & Robot, Shanghai 200433, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Engn Res Ctr Robot, Minist Educ, Shanghai 200433, Peoples R China
Keywords
Adversarial examples; Black-box attacks; Adversarial transferability; Gradient optimization; Robustness;
DOI
10.1016/j.eswa.2024.124757
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to benign inputs. Moreover, adversarial examples exhibit transferability across models, enabling practical black-box attacks. However, existing methods still fall short of the desired transfer attack performance. In this work, focusing on gradient optimization and consistency, we analyze the gradient elimination phenomenon as well as the local momentum optimum dilemma. To tackle these challenges, we introduce Global Momentum Initialization (GI), which provides global momentum knowledge to mitigate gradient elimination. Specifically, we perform gradient pre-convergence before the attack and a global search during this stage. GI integrates seamlessly with existing transfer methods, improving the success rate of transfer attacks by an average of 6.4% under various advanced defense mechanisms compared to the state-of-the-art method. Ultimately, GI demonstrates strong transferability in both image and video attack domains. In particular, when attacking advanced defense methods in the image domain, it achieves an average attack success rate of 95.4%. The code is available at https://github.com/Omenzychen/Global-MomentumInitialization.
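The abstract's two-stage idea (momentum pre-convergence with a global search, then the standard momentum-based iterative attack) can be sketched as follows. This is a minimal NumPy illustration built on top of MI-FGSM, assuming a generic `grad_fn` that returns the loss gradient with respect to the input; the names `pre_steps` and `search_factor`, and the exact normalization, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def gi_mi_fgsm(grad_fn, x, eps=16 / 255, steps=10, mu=1.0,
               pre_steps=5, search_factor=5.0):
    """Sketch: MI-FGSM with Global Momentum Initialization (GI).

    grad_fn(z): gradient of the attack loss w.r.t. the input z.
    pre_steps / search_factor control the pre-convergence stage
    (hypothetical parameter names, not from the paper).
    """
    alpha = eps / steps
    g = np.zeros_like(x)  # momentum accumulator

    # Stage 1: gradient pre-convergence with an enlarged step
    # (global search). Only the accumulated momentum g is kept;
    # the perturbed copy x_pre is discarded afterwards.
    x_pre = x.copy()
    for _ in range(pre_steps):
        grad = grad_fn(x_pre)
        g = mu * g + grad / (np.abs(grad).mean() + 1e-12)
        x_pre = np.clip(x_pre + search_factor * alpha * np.sign(g), 0.0, 1.0)

    # Stage 2: standard MI-FGSM iterations, but starting from the
    # pre-converged global momentum instead of zero.
    adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(adv)
        g = mu * g + grad / (np.abs(grad).mean() + 1e-12)
        adv = np.clip(adv + alpha * np.sign(g), x - eps, x + eps)
        adv = np.clip(adv, 0.0, 1.0)
    return adv
```

Because only the momentum survives stage 1, the final perturbation still respects the L-infinity budget `eps`; the pre-convergence merely biases the first real attack steps toward a globally consistent ascent direction.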
Pages: 10
Related papers (50 total)
  • [1] Boosting Adversarial Attacks with Momentum
    Dong, Yinpeng
    Liao, Fangzhou
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
    Hu, Xiaolin
    Li, Jianguo
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 9185 - 9193
  • [2] Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
    Qin, Zeyu
    Fan, Yanbo
    Liu, Yi
    Shen, Li
    Zhang, Yong
    Wang, Jue
    Wu, Baoyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Boosting the Transferability of Adversarial Attacks With Frequency-Aware Perturbation
    Wang, Yajie
    Wu, Yi
    Wu, Shangbo
    Liu, Ximeng
    Zhou, Wanlei
    Zhu, Liehuang
    Zhang, Chuan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 6293 - 6304
  • [4] Improving Transferability of Adversarial Attacks with Gaussian Gradient Enhance Momentum
    Wang, Jinwei
    Wang, Maoyuan
    Wu, Hao
    Ma, Bin
    Luo, Xiangyang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 421 - 432
  • [5] Boosting adversarial attacks with future momentum and future transformation
    Mao, Zhongshu
    Lu, Yiqin
    Cheng, Zhe
    Shen, Xiong
    Zhang, Yang
    Qin, Jiancheng
    COMPUTERS & SECURITY, 2023, 127
  • [6] Boosting the transferability of adversarial attacks with adaptive points selecting in temporal neighborhood
    Zhu, Hegui
    Zheng, Haoran
    Zhu, Ying
    Sui, Xiaoyan
    INFORMATION SCIENCES, 2023, 641
  • [7] Improving the Transferability of Adversarial Attacks through Experienced Precise Nesterov Momentum
    Wu, Hao
    Wang, Jinwei
    Zhang, Jiawei
    Wu, Yufeng
    Ma, Bin
    Luo, Xiangyang
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [8] Boosting the transferability of adversarial CAPTCHAs
    Xu, Zisheng
    Yan, Qiao
    COMPUTERS & SECURITY, 2024, 145
  • [9] Probability-Distribution-Guided Adversarial Sample Attacks for Boosting Transferability and Interpretability
    Li, Hongying
    Yu, Miaomiao
    Li, Xiaofei
    Zhang, Jun
    Li, Shuohao
    Lei, Jun
    Huang, Hairong
    MATHEMATICS, 2023, 11 (13)
  • [10] Boosting Adversarial Transferability via Relative Feature Importance-Aware Attacks
    Li, Jian-Wei
    Shao, Wen-Ze
    Sun, Yu-Bao
    Wang, Li-Qian
    Ge, Qi
    Xiao, Liang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 3489 - 3504