Improving Adversarial Robustness via Attention and Adversarial Logit Pairing

Times Cited: 1
Authors
Li, Xingjian [1 ]
Goodman, Dou [2 ]
Liu, Ji [1 ]
Wei, Tao [2 ]
Dou, Dejing [1 ]
Affiliations
[1] Baidu Res, Big Data Lab, Beijing, Peoples R China
[2] Baidu Inc, X Lab, Beijing, Peoples R China
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE | 2022, Vol. 4
Keywords
adversarial training; attention; adversarial robustness; adversarial example; deep learning; deep neural network
DOI
10.3389/frai.2021.752831
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are vulnerable to adversarial examples. In this paper, we develop improved techniques for defending against adversarial examples. First, we propose an enhanced defense technique, Attention and Adversarial Logit Pairing (AT + ALP), which encourages both the attention maps and the logits of paired examples to be similar. When applied to clean examples and their adversarial counterparts, AT + ALP improves accuracy on adversarial examples over plain adversarial training. We show that AT + ALP effectively increases the average activations of adversarial examples in the key areas and demonstrate that it focuses on discriminative features, improving the robustness of the model. Finally, we conduct extensive experiments on a wide range of datasets; the results show that AT + ALP achieves state-of-the-art defense performance. For example, on the 17 Flower Category Database, under strong 200-iteration Projected Gradient Descent (PGD) gray-box and black-box attacks where prior art attains 34% and 39% accuracy, our method achieves 50% and 51%. Compared with previous work, ours is evaluated under a highly challenging PGD attack: the maximum L-infinity perturbation epsilon ∈ {0.25, 0.5} with 10-200 attack iterations. To the best of our knowledge, such a strong attack has not previously been explored on such a wide range of datasets.
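The abstract describes a combined objective: standard adversarial training plus two pairing penalties that pull the logits and the attention maps of each clean/adversarial pair together. A minimal sketch of such a loss is below; the function name, the lambda coefficients, and the choice of squared-L2 pairing distances are illustrative assumptions, not details stated in this record.

```python
import numpy as np

def at_alp_loss(logits_clean, logits_adv, attn_clean, attn_adv,
                labels, lam_alp=0.5, lam_at=0.5):
    """Sketch of an AT+ALP-style objective (coefficients are assumptions):
    cross-entropy on adversarial examples, plus L2 pairing terms on the
    logits and on the (normalized) attention maps of clean/adv pairs."""
    # Adversarial-training term: cross-entropy on the adversarial logits,
    # computed via a numerically stable log-softmax.
    z = logits_adv - logits_adv.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()

    # Adversarial logit pairing: squared L2 distance between paired logits.
    alp = np.mean(np.sum((logits_clean - logits_adv) ** 2, axis=1))

    # Attention pairing: squared L2 distance between L2-normalized,
    # flattened attention maps of each clean/adversarial pair.
    def _normalize(a):
        flat = a.reshape(len(a), -1)
        return flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    at = np.mean(np.sum((_normalize(attn_clean) - _normalize(attn_adv)) ** 2,
                        axis=1))

    return ce + lam_alp * alp + lam_at * at
```

With identical clean and adversarial inputs both pairing terms vanish and the loss reduces to the cross-entropy term alone; any mismatch between the pairs adds a non-negative penalty.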
Pages: 9