Boosting Adversarial Transferability via Relative Feature Importance-Aware Attacks

Citations: 0
Authors
Li, Jian-Wei [1 ]
Shao, Wen-Ze [1 ]
Sun, Yu-Bao [2 ]
Wang, Li-Qian [1 ]
Ge, Qi [1 ]
Xiao, Liang [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Commun & Informat Engn, Jiangsu Key Lab Intelligent Informat Proc & Commun, Nanjing 210003, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Funding
US National Science Foundation;
Keywords
Boosting; Feature extraction; Closed box; Computational modeling; Backpropagation; Artificial intelligence; Neurons; Glass box; Training; Sun; Deep neural networks; victim model; adversarial transferability; MI-FGSM; intermediate-level attacks;
DOI
10.1109/TIFS.2025.3552030
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Modern deep neural networks are known to be highly vulnerable to adversarial examples. As a pioneering work, the fast gradient sign method (FGSM) has proved more transferable in black-box attacks than its multi-small-step extension, iterative FGSM, particularly when restricted to a limited number of iterations. This paper revisits their early, representative successor MI-FGSM, i.e., iterative FGSM with momentum, as a baseline, and introduces a boosting idea distinct from both FGSM-inspired algorithms and other mainstream methods. First, during the gradient backpropagation of MI-FGSM, the proposed approach merely amends the chain rule with respect to the adversarial images using the counterpart original images. Second, a credible analysis reveals that such a naively boosted MI-FGSM essentially performs a special kind of intermediate-level attack. Specifically, the notable finding of the paper is a new principle of adversarial transferability guided by relative feature importance, emphasizing for the first time in the literature the significance of semantically non-critical information, which had previously been thought to be largely weak. Experimental results on various leading victim models, both undefended and defended, demonstrate that the new approach incorporating robust gradients attains stronger adversarial transferability than state-of-the-art works. The code is available at: https://github.com/ljwooo/RFIA-main.
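The MI-FGSM baseline that the abstract revisits can be sketched in a few lines. The following is a minimal NumPy illustration of the standard momentum update (L1-normalized gradient accumulation plus a signed step, projected onto the epsilon ball); the `grad_fn`, toy loss, and parameter values are stand-in assumptions for illustration, and the paper's proposed chain-rule amendment using the original images is not reproduced here.

```python
import numpy as np

def mi_fgsm(x0, grad_fn, eps=0.1, steps=10, mu=1.0):
    """Minimal MI-FGSM sketch: momentum-accumulated signed gradient ascent.

    x0      : original input (NumPy array)
    grad_fn : callable returning dL/dx of the attack loss L at x
    eps     : L-infinity perturbation budget
    mu      : momentum decay factor
    """
    alpha = eps / steps            # per-iteration step size
    x = x0.copy()
    g = np.zeros_like(x0)          # accumulated momentum
    for _ in range(steps):
        grad = grad_fn(x)
        # L1-normalize the raw gradient before accumulating momentum
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        # signed step, then project back into the epsilon ball around x0
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    return x

# Toy stand-in attack loss L(x) = 0.5 * ||x - t||^2, so dL/dx = x - t.
t = np.array([1.0, -1.0, 0.5])
adv = mi_fgsm(np.zeros(3), lambda x: x - t, eps=0.1, steps=10)
```

In a real attack, `grad_fn` would backpropagate a classification loss through the surrogate model; it is at this backpropagation step that the paper's chain-rule modification with the original images would apply.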
Pages: 3489-3504
Page count: 16