Meta Gradient Adversarial Attack

Cited by: 55
Authors
Yuan, Zheng [1 ,2 ]
Zhang, Jie [1 ,2 ]
Jia, Yunpei [1 ,2 ]
Tan, Chuanqi [3 ]
Xue, Tao [3 ]
Shan, Shiguang [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Tencent, Shenzhen, Peoples R China
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
Funding
National Key R&D Program of China;
DOI
10.1109/ICCV48922.2021.00765
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, research on adversarial attacks has attracted intense attention. Although current literature on transfer-based adversarial attacks has achieved promising results in improving transferability to unseen black-box models, there is still substantial room for improvement. Inspired by the idea of meta-learning, this paper proposes a novel architecture called Meta Gradient Adversarial Attack (MGAA), which is plug-and-play and can be integrated with any existing gradient-based attack method to improve cross-model transferability. Specifically, we randomly sample multiple models from a model zoo to compose different tasks and iteratively simulate a white-box attack and a black-box attack in each task. By narrowing the gap between the gradient directions of the white-box and black-box attacks, the transferability of adversarial examples in the black-box setting can be improved. Extensive experiments on the CIFAR10 and ImageNet datasets show that our architecture outperforms state-of-the-art methods in both black-box and white-box attack settings.
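The meta-iteration described above (sample a task from the model zoo, take a white-box ensemble step, then refine with a held-out black-box step) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy linear models, the margin-style loss whose input gradient is `-y * w`, and all hyperparameter names are assumptions chosen only to make the control flow concrete.

```python
import numpy as np

def grad_loss(w, x, y):
    # Gradient w.r.t. the input x of a toy margin loss L(x) = -y * (w . x)
    # for a linear model with weights w (assumed stand-in for a real network).
    return -y * w

def mgaa_sketch(x, y, model_zoo, eps=0.3, alpha=0.05, meta_iters=10,
                task_size=3, rng=None):
    """Hedged sketch of the MGAA meta-iteration.

    Each meta-iteration samples a task: `task_size` white-box models plus one
    held-out black-box model. The perturbation is first updated with the
    ensemble (white-box) gradient, then refined with the held-out model's
    gradient, mimicking the meta-train / meta-test split of meta-learning.
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    for _ in range(meta_iters):
        idx = rng.choice(len(model_zoo), size=task_size + 1, replace=False)
        white = [model_zoo[i] for i in idx[:-1]]
        black = model_zoo[idx[-1]]
        # Meta-train: white-box step with the ensemble gradient (sign step).
        g_white = np.mean([grad_loss(w, x_adv, y) for w in white], axis=0)
        x_tmp = x_adv + alpha * np.sign(g_white)
        # Meta-test: black-box gradient evaluated at the updated point,
        # used to pull the update direction toward the unseen model.
        g_black = grad_loss(black, x_tmp, y)
        x_adv = x_adv + alpha * np.sign(g_white + g_black)
        # Project back into the epsilon-ball around the clean input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

In a real instantiation, `grad_loss` would backpropagate a classification loss through a pretrained network, and the sign step corresponds to the iterative FGSM-style update that MGAA wraps around.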
Pages: 7728-7737
Page count: 10