Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks

Cited by: 71
Authors
Wang, Jianyu [1 ]
Zhang, Haichao [1 ]
Affiliations
[1] Baidu Research USA, Sunnyvale, CA 94089, USA
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) | 2019
Keywords
DOI
10.1109/ICCV.2019.00673
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we study fast training of adversarially robust models. From analyses of the state-of-the-art defense, multi-step adversarial training [34], we hypothesize that the gradient magnitude is linked to model robustness. Motivated by this, we propose to perturb both the image and the label during training, which we call Bilateral Adversarial Training (BAT). To generate the adversarial label, we derive a closed-form heuristic solution. To generate the adversarial image, we use a one-step targeted attack whose target is the most confusing class. In experiments, we first show that the random start and the most-confusing-class targeted attack effectively prevent the label leaking and gradient masking problems. Coupled with the adversarial label, our model significantly improves on the state-of-the-art results. For example, against the PGD100 white-box attack with cross-entropy loss, we achieve 63.7% versus 47.2% on CIFAR10, and 59.1% versus 42.1% on SVHN. Finally, experiments on the computationally challenging ImageNet dataset further demonstrate the effectiveness of our fast method.
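The image-perturbation step described in the abstract, a random start inside the epsilon ball followed by a single targeted step toward the most confusing (highest-scoring incorrect) class, can be sketched as below. This is a minimal illustration written against PyTorch, not the authors' released code: the names model, x, y, epsilon, alpha, and smooth are assumptions, and the uniform label smoothing at the end is only a stand-in for the paper's closed-form adversarial-label heuristic.

    import torch
    import torch.nn.functional as F

    def bat_one_step(model, x, y, epsilon=8/255, alpha=8/255, smooth=0.1):
        """One-step targeted attack toward the most confusing class, with a
        random start, plus a smoothed stand-in for the adversarial label."""
        # Random start: uniform noise inside the epsilon ball; the abstract
        # credits this with helping prevent label leaking / gradient masking.
        x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0).detach().requires_grad_(True)

        logits = model(x_adv)

        # Most confusing class: the highest-scoring incorrect class.
        masked = logits.detach().clone()
        masked.scatter_(1, y.unsqueeze(1), float("-inf"))
        y_target = masked.argmax(dim=1)

        # One targeted step: descend the loss with respect to the target label.
        loss = F.cross_entropy(logits, y_target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv - alpha * grad.sign()

        # Project back into the epsilon ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0).detach()

        # Adversarial label stand-in: uniform label smoothing. The paper
        # instead derives a closed-form heuristic for this distribution.
        num_classes = logits.size(1)
        y_soft = torch.full_like(logits, smooth / (num_classes - 1))
        y_soft.scatter_(1, y.unsqueeze(1), 1.0 - smooth)
        return x_adv, y_soft

The returned pair (x_adv, y_soft) would then replace the clean image and one-hot label in an otherwise standard training step.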
Pages: 6628-6637
Number of pages: 10
References
55 in total
[1] [Anonymous], 2016, CoRR abs/1512.00567, DOI 10.1109/CVPR.2016.308
[2] Athalye A., 2018, PR MACH LEARN RES, V80
[3] Biggio B., 2013, MACHINE LEARNING KNO, P387, DOI 10.1007/978-3-642-40994-3_25
[4] Biggio, Battista; Roli, Fabio. Wild patterns: Ten years after the rise of adversarial machine learning [J]. PATTERN RECOGNITION, 2018, 84: 317-331
[5] Brendel W., 2018, ICLR, P1
[6] Carlini N., 2017, ACM WORKSH ART INT S, P3
[7] Carlini, Nicholas; Wagner, David. Towards Evaluating the Robustness of Neural Networks [J]. 2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017: 39-57
[8] Chen H., 2018, ACL
[9] Chen P.Y., 2017, P 10 ACM WORKSH ART, P15
[10] Chen P.Y., 2018, AAAI CONF ARTIF INTE, P10