Boosting Adversarial Transferability via Gradient Relevance Attack

Cited: 17
Authors
Zhu, Hegui [1 ]
Ren, Yuchen [1 ]
Sui, Xiaoyan [1 ]
Yang, Lianping [1 ]
Jiang, Wuming [2 ]
Affiliations
[1] Northeastern Univ, Coll Sci, Shenyang, Peoples R China
[2] Beijing EyeCool Technol, Beijing, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV | 2023
DOI
10.1109/ICCV51070.2023.00437
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Plentiful adversarial attack research has revealed the fragility of deep neural networks (DNNs), where imperceptible perturbations can cause drastic changes in the output. Among the diverse attack methods, gradient-based attacks are powerful and easy to implement, arousing wide concern about the security of DNNs. Under the black-box setting, however, existing gradient-based attacks have great difficulty breaking through DNN models equipped with defense technologies, especially adversarially trained models. To make adversarial examples more transferable, in this paper we explore the fluctuation of the plus-minus signs of the adversarial perturbation's pixels during the generation of adversarial examples, and propose an ingenious Gradient Relevance Attack (GRA). Specifically, two gradient relevance frameworks are presented to better utilize the information in the neighborhood of the input, which can correct the update direction adaptively. We then adjust the update step at each iteration with a decay indicator to counter the fluctuation. Experimental results on a subset of the ILSVRC 2012 validation set convincingly verify the effectiveness of GRA. Furthermore, attack success rates of 68.7% and 64.8% on Tencent Cloud and Baidu AI Cloud, respectively, indicate that GRA can craft adversarial examples that transfer across both datasets and model architectures. Code is released at https://github.com/RYC-98/GRA.
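The abstract describes an iterative gradient-based attack that (a) averages gradients over a sampled neighborhood of the input, (b) weights the update by the relevance (e.g. cosine similarity) between the current gradient and the neighborhood average, and (c) damps the per-pixel step size wherever the perturbation sign fluctuates. The sketch below illustrates that general idea on a toy differentiable loss; it is a hypothetical reconstruction, not the authors' implementation — all function names, the relevance weighting, and the hyperparameter values are assumptions (see the released code at the URL above for the actual method):

```python
import numpy as np

def toy_loss_grad(x, target):
    """Stand-in for a model's loss: L(x) = ||x - target||^2, grad = 2(x - target)."""
    return np.sum((x - target) ** 2), 2.0 * (x - target)

def gra_sketch(x, target, eps=0.1, steps=10, mu=1.0,
               n_nb=8, radius=0.05, eta=0.94, seed=0):
    """Hedged sketch of a gradient-relevance-style attack (maximizes the loss)."""
    rng = np.random.default_rng(seed)
    alpha = eps / steps                 # base per-iteration step size
    adv = x.copy()
    m = np.zeros_like(x)                # momentum accumulator
    decay = np.ones_like(x)             # per-pixel decay indicator
    prev_sign = np.zeros_like(x)
    for _ in range(steps):
        _, g = toy_loss_grad(adv, target)
        # average gradient over a sampled neighborhood of the current point
        nb = np.zeros_like(x)
        for _ in range(n_nb):
            noise = rng.uniform(-radius, radius, size=x.shape)
            _, gn = toy_loss_grad(adv + noise, target)
            nb += gn
        nb /= n_nb
        # "relevance" weight: cosine similarity between current and neighborhood grads
        cos = np.sum(g * nb) / (np.linalg.norm(g) * np.linalg.norm(nb) + 1e-12)
        g_hat = cos * g + (1.0 - cos) * nb
        m = mu * m + g_hat / (np.sum(np.abs(g_hat)) + 1e-12)
        step_sign = np.sign(m)
        # shrink the step for pixels whose perturbation sign just flipped
        decay = np.where(step_sign != prev_sign, decay * eta, decay)
        prev_sign = step_sign
        adv = adv + alpha * decay * step_sign       # gradient-sign ascent on the loss
        adv = np.clip(adv, x - eps, x + eps)        # stay inside the L-inf eps-ball
    return adv
```

On this toy loss the attack pushes the input away from the target inside the epsilon ball, so the loss strictly increases; with a real model, `toy_loss_grad` would be replaced by a forward-backward pass on a surrogate network.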
Pages: 4718 - 4727
Page count: 10
Related Papers
50 records in total
  • [1] Boosting the transferability of adversarial examples via stochastic serial attack
    Hao, Lingguang
    Hao, Kuangrong
    Wei, Bing
    Tang, Xue-song
    NEURAL NETWORKS, 2022, 150 : 58 - 67
  • [2] An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability
    Chen, Bin
    Yin, Jiali
    Chen, Shukai
    Chen, Bohao
    Liu, Ximeng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4466 - 4475
  • [3] Boosting the Transferability of Adversarial Examples with Gradient-Aligned Ensemble Attack for Speaker Recognition
    Li, Zhuhai
    Zhang, Jie
    Guo, Wu
    Wu, Haochen
    INTERSPEECH 2024, 2024, : 532 - 536
  • [4] Boosting the Transferability of Ensemble Adversarial Attack via Stochastic Average Variance Descent
    Zhao, Lei
    Liu, Zhizhi
    Wu, Sixing
    Chen, Wei
    Wu, Liwen
    Pu, Bin
    Yao, Shaowen
    IET INFORMATION SECURITY, 2024, 2024
  • [5] Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability
    Xiong, Yifeng
    Lin, Jiadong
    Zhang, Min
    Hopcroft, John E.
    He, Kun
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 14963 - 14972
  • [6] Gradient Aggregation Boosting Adversarial Examples Transferability Method
    Deng, Shiyun
    Ling, Jie
    Computer Engineering and Applications, 2024, 60 (14) : 275 - 282
  • [7] Boosting the Transferability of Adversarial Samples via Attention
    Wu, Weibin
    Su, Yuxin
    Chen, Xixian
    Zhao, Shenglin
    King, Irwin
    Lyu, Michael R.
    Tai, Yu-Wing
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 1158 - 1167
  • [8] Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images
    Lin, Gengyou
    Pan, Zhisong
    Zhou, Xingyu
    Duan, Yexin
    Bai, Wei
    Zhan, Dazhi
    Zhu, Leqian
    Zhao, Gaoqiang
    Li, Tao
    REMOTE SENSING, 2023, 15 (10)
  • [9] Decreasing adversarial transferability using gradient information of attack paths
    Xu, Mengjun
    Liu, Lei
    Xia, Pengfei
    Li, Ziqiang
    Li, Bin
    APPLIED SOFT COMPUTING, 2025, 170
  • [10] Boosting the transferability of adversarial CAPTCHAs
    Xu, Zisheng
    Yan, Qiao
    COMPUTERS & SECURITY, 2024, 145