Simple Techniques are Sufficient for Boosting Adversarial Transferability

Cited by: 0
Authors
Zhang, Chaoning [1 ]
Benz, Philipp [2 ]
Karjauv, Adil [3 ]
Kweon, In So [3 ]
Hong, Choong Seon [1 ]
Institutions
[1] Kyung Hee Univ, Seoul, South Korea
[2] Deeping Source, Seoul, South Korea
[3] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Adversarial Transferability; Transferable Attacks; Targeted Attacks;
DOI
10.1145/3581783.3612598
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transferable targeted adversarial attacks against deep image classifiers remain an open problem. Depending on the space in which the loss is optimized, existing methods fall into two categories: (a) feature space attacks and (b) output space attacks. Feature space attacks outperform output space attacks by a large margin, but at the cost of training layer-wise auxiliary classifiers for each target class, together with a greedy search for the optimal layers. In this work, we revisit the output space attack and improve it from two perspectives. First, we identify over-fitting as a major factor that hinders transferability, and propose to augment the network input and/or feature layers with noise. Second, we propose a new cross-entropy loss with two ends: one pushes the sample far from the source class, i.e., the ground-truth class, while the other pulls it close to the target class. We demonstrate that simple techniques are sufficient to achieve very competitive performance.
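The two techniques the abstract describes can be sketched as follows. This is a hypothetical, minimal rendering (not the authors' exact formulation); the function names and the noise scale `sigma` are illustrative assumptions:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(logits, cls):
    """Cross-entropy of the softmax distribution against class `cls`."""
    return -np.log(softmax(logits)[cls])

def two_ended_ce(logits, source_cls, target_cls):
    """Two-ended loss: pull the sample toward the target class while
    pushing it away from the source (ground-truth) class.
    Sign convention is an assumption: lower is better for the attacker."""
    return cross_entropy(logits, target_cls) - cross_entropy(logits, source_cls)

def augment_with_noise(x, sigma=0.1, rng=None):
    """Gaussian noise augmentation of the network input, one of the
    anti-overfitting measures the abstract mentions."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return x + sigma * rng.standard_normal(x.shape)
```

Under this sign convention, logits that favor the target class yield a lower loss than logits that favor the source class, so gradient descent on the perturbed input drives the sample toward the target while moving it away from the ground truth.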
Pages: 8486 - 8494
Page count: 9
Related Papers
50 records in total
  • [1] Boosting the transferability of adversarial CAPTCHAs
    Xu, Zisheng
    Yan, Qiao
    COMPUTERS & SECURITY, 2024, 145
  • [2] StyLess: Boosting the Transferability of Adversarial Examples
    Liang, Kaisheng
    Xiao, Bin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8163 - 8172
  • [3] Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
    Qin, Zeyu
    Fan, Yanbo
    Liu, Yi
    Shen, Li
    Zhang, Yong
    Wang, Jue
    Wu, Baoyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [4] An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability
    Chen, Bin
    Yin, Jiali
    Chen, Shukai
    Chen, Bohao
    Liu, Ximeng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4466 - 4475
  • [5] Boosting the Transferability of Adversarial Samples via Attention
    Wu, Weibin
    Su, Yuxin
    Chen, Xixian
    Zhao, Shenglin
    King, Irwin
    Lyu, Michael R.
    Tai, Yu-Wing
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 1158 - 1167
  • [6] Boosting Adversarial Transferability Through Intermediate Feature
    He, Chenghai
    Li, Xiaoqian
    Zhang, Xiaohang
    Zhang, Kai
    Li, Hailing
    Xiong, Gang
    Li, Xuan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT V, 2023, 14258 : 28 - 39
  • [7] Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability
    Xiong, Yifeng
    Lin, Jiadong
    Zhang, Min
    Hopcroft, John E.
    He, Kun
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 14963 - 14972
  • [8] Boosting the transferability of adversarial attacks with global momentum initialization
    Wang, Jiafeng
    Chen, Zhaoyu
    Jiang, Kaixun
    Yang, Dingkang
    Hong, Lingyi
    Guo, Pinxue
    Guo, Haijing
    Zhang, Wenqiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 255
  • [9] Gradient Aggregation Boosting Adversarial Examples Transferability Method
    Deng, Shiyun
    Ling, Jie
    Computer Engineering and Applications, 2024, 60 (14) : 275 - 282
  • [10] Boosting Adversarial Transferability by Achieving Flat Local Maxima
    Ge, Zhijin
    Liu, Hongying
    Wang, Xiaosen
    Shang, Fanhua
    Liu, Yuanyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,