Boosting the transferability of adversarial attacks with global momentum initialization

Cited by: 6
Authors
Wang, Jiafeng [1 ]
Chen, Zhaoyu [2 ,3 ]
Jiang, Kaixun [2 ,3 ]
Yang, Dingkang [2 ,3 ]
Hong, Lingyi [1 ]
Guo, Pinxue [2 ,3 ]
Guo, Haijing [1 ]
Zhang, Wenqiang [1 ,2 ,3 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China
[2] Fudan Univ, Acad Engn & Technol, Shanghai Engn Res Ctr AI & Robot, Shanghai 200433, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Engn Res Ctr Robot, Minist Educ, Shanghai 200433, Peoples R China
Keywords
Adversarial examples; Black-box attacks; Adversarial transferability; Gradient optimization; Robustness;
DOI
10.1016/j.eswa.2024.124757
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to benign inputs. Adversarial examples also exhibit transferability across models, enabling practical black-box attacks. However, existing methods still fall short of the desired transfer attack performance. In this work, focusing on gradient optimization and consistency, we analyze the gradient elimination phenomenon as well as the local momentum optimum dilemma. To tackle these challenges, we introduce Global Momentum Initialization (GI), providing global momentum knowledge to mitigate gradient elimination. Specifically, we perform gradient pre-convergence before the attack and a global search during this stage. GI seamlessly integrates with existing transfer methods, improving the success rate of transfer attacks by an average of 6.4% under various advanced defense mechanisms compared to the state-of-the-art method. GI demonstrates strong transferability in both image and video attack domains; in particular, when attacking advanced defense methods in the image domain, it achieves an average attack success rate of 95.4%. The code is available at https://github.com/Omenzychen/Global-MomentumInitialization.
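The two-stage idea described in the abstract (gradient pre-convergence with a global search, followed by the actual attack) can be sketched on top of MI-FGSM roughly as below. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the function name gi_mifgsm and the hyper-parameters pre_iters and search_factor are assumptions made here for illustration, and the repository linked above should be treated as the reference.

import torch

def gi_mifgsm(model, x, y, eps=16/255, iters=10, mu=1.0,
              pre_iters=5, search_factor=10.0):
    """Sketch: momentum pre-convergence, then MI-FGSM restarted from the clean input."""
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = eps / iters                      # per-step size
    g = torch.zeros_like(x)                  # momentum accumulator

    def loss_grad(x_in):
        x_in = x_in.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_in), y)
        return torch.autograd.grad(loss, x_in)[0]

    # Stage 1 (assumed form of the pre-convergence stage): accumulate momentum
    # with an enlarged, global-search step; the perturbed image is discarded.
    x_adv = x.clone().detach()
    for _ in range(pre_iters):
        grad = loss_grad(x_adv)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + search_factor * alpha * g.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)

    # Stage 2: standard MI-FGSM restarted from the benign input, but
    # initialized with the pre-converged global momentum g.
    x_adv = x.clone().detach()
    for _ in range(iters):
        grad = loss_grad(x_adv)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()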
Pages: 10