GM-Attack: Improving the Transferability of Adversarial Attacks

Cited by: 8
Authors
Hong, Jinbang [1 ,2 ]
Tang, Keke [3 ]
Gao, Chao [2 ]
Wang, Songxin [4 ]
Guo, Sensen [5 ]
Zhu, Peican [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[3] Guangzhou Univ, Cyberspace Inst Adv Technol, Guangzhou 510006, Guangdong, Peoples R China
[4] Shanghai Univ Finance & Econ, Sch Informat Management & Engn, Shanghai 200433, Peoples R China
[5] Northwestern Polytech Univ, Sch Cybersecur, Xian 710072, Shaanxi, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Deep neural networks; Adversarial attack; Adversarial examples; Data augmentation; White-box/black-box attack; Transferability;
DOI
10.1007/978-3-031-10989-8_39
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the real world, black-box attacks are common because attackers typically lack detailed knowledge of the target model. Hence, adversarial examples with high transferability are desirable, as they facilitate practical adversarial attacks. Instead of adopting traditional input transformation approaches, we propose a mechanism that derives masked images by removing some regions from the initial input images; in this manuscript, the removed regions are spatially uniformly distributed squares. For comparison, several transferable attack methods are adopted as baselines. Extensive empirical evaluations are conducted on the standard ImageNet dataset to validate the effectiveness of GM-Attack. The results indicate that GM-Attack crafts more transferable adversarial examples than other input transformation methods, and the attack success rate on Inc-v4 is improved by 6.5% over state-of-the-art methods.
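The masking operation described in the abstract (removing spatially uniformly distributed squares from the input image) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the grid count and square-size parameters, and the choice to fill removed regions with zeros are all assumptions, since this record does not specify the exact mask configuration.

```python
import numpy as np

def grid_mask(image, grid=4, square=20):
    """Remove a uniform grid of square regions from an image.

    Hypothetical sketch of the masking step described in the abstract.
    `grid` (squares per axis), `square` (side length in pixels), and the
    zero fill value are illustrative assumptions, not the paper's settings.
    """
    h, w = image.shape[:2]
    masked = image.copy()
    # Place square centers at the midpoints of a uniform grid of cells.
    ys = (np.linspace(0, h, grid, endpoint=False) + h / (2 * grid)).astype(int)
    xs = (np.linspace(0, w, grid, endpoint=False) + w / (2 * grid)).astype(int)
    half = square // 2
    for cy in ys:
        for cx in xs:
            # Clip each square to the image bounds, then zero it out.
            y0, y1 = max(cy - half, 0), min(cy + half, h)
            x0, x1 = max(cx - half, 0), min(cx + half, w)
            masked[y0:y1, x0:x1] = 0
    return masked
```

In a transfer-attack pipeline of this kind, gradients would be computed on such masked copies (possibly averaged over several random configurations) rather than on the clean input, which is what discourages the attack from overfitting to the surrogate model.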
Pages: 489-500
Page count: 12