GM-Attack: Improving the Transferability of Adversarial Attacks

Cited by: 8
Authors
Hong, Jinbang [1 ,2 ]
Tang, Keke [3 ]
Gao, Chao [2 ]
Wang, Songxin [4 ]
Guo, Sensen [5 ]
Zhu, Peican [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[3] Guangzhou Univ, Cyberspace Inst Adv Technol, Guangzhou 510006, Guangdong, Peoples R China
[4] Shanghai Univ Finance & Econ, Sch Informat Management & Engn, Shanghai 200433, Peoples R China
[5] Northwestern Polytech Univ, Sch Cybersecur, Xian 710072, Shaanxi, Peoples R China
Source
KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2022, PT III | 2022, Vol. 13370
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Deep neural networks; Adversarial attack; Adversarial examples; Data augmentation; White-box/black-box attack; Transferability;
DOI
10.1007/978-3-031-10989-8_39
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In the real world, black-box attacks are prevalent, as detailed information about the target models is usually unavailable. Hence, it is desirable to craft adversarial examples with high transferability, which facilitates practical adversarial attacks. Instead of adopting traditional input transformation approaches, we propose a mechanism that derives masked images by removing certain regions from the initial input images; in this manuscript, the removed regions are spatially uniformly distributed squares. For comparison, several transferable attack methods are adopted as baselines. Extensive empirical evaluations are conducted on the standard ImageNet dataset to validate the effectiveness of GM-Attack. The results indicate that GM-Attack crafts more transferable adversarial examples than other input transformation methods, and the attack success rate on Inc-v4 is improved by 6.5% over state-of-the-art methods.
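The abstract describes the masking operation only at a high level. The Python sketch below is a hypothetical illustration of that idea, not the authors' reference implementation: it zeroes out a uniform grid of square regions in an input image, producing the kind of masked copies over which an attack could aggregate gradients. The function name grid_mask, the square size, and the grid stride are all assumptions.

    # Hypothetical sketch of grid-style square masking (assumed parameters).
    import numpy as np

    def grid_mask(image, square=16, stride=64):
        # Zero out square x square regions on a uniform grid with spacing
        # `stride`; NumPy slicing safely clips at the image border.
        masked = image.copy()
        h, w = image.shape[:2]
        for y in range(0, h, stride):
            for x in range(0, w, stride):
                masked[y:y + square, x:x + square] = 0
        return masked

    # Example: mask a random 224x224 RGB image.
    img = np.random.rand(224, 224, 3).astype(np.float32)
    print(grid_mask(img).shape)  # -> (224, 224, 3)

In a gradient-based transfer attack, such masked copies (for instance, with randomized grid offsets) would presumably be fed through the surrogate model and their loss gradients averaged before each perturbation step, analogous to how the other input transformation baselines are applied.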
Pages: 489-500
Number of pages: 12
Related Papers
21 records in total
[1] Chen, P. G., 2024, arXiv, DOI: arXiv:2001.04086.
[2] Dong, Yinpeng; Pang, Tianyu; Su, Hang; Zhu, Jun. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 4307-4316.
[3] Dong, Yinpeng; Liao, Fangzhou; Pang, Tianyu; Su, Hang; Zhu, Jun; Hu, Xiaolin; Li, Jianguo. Boosting Adversarial Attacks with Momentum. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 9185-9193.
[4] Girshick, Ross. Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV), 2015: 1440-1448.
[5] Goodfellow, I. J., 2015, ICLR.
[6] He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
[7] Hopcroft, John E., 2019, International Conference on Learning Representations (ICLR).
[8] Zou, Junhua; Pan, Zhisong; Qiu, Junyang; Liu, Xin; Rui, Ting; Li, Wei. Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting. Computer Vision - ECCV 2020, Part XXII, 2020, 12367: 563-579.
[9] Kurakin, A., 2017, arXiv, DOI: arXiv:1607.02533.
[10] Li, Y. W., 2020, AAAI Conference on Artificial Intelligence, Vol. 34, p. 11458.