Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples

Cited by: 9
Authors
Liu, Guanxiong [1 ]
Khalil, Issa [2 ]
Khreishah, Abdallah [1 ]
Affiliations
[1] New Jersey Inst Technol, Newark, NJ 07102 USA
[2] Qatar Comp Res Inst, Doha, Qatar
Source
PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY (CODASPY '21) | 2021
Keywords
adversarial machine learning; adversarial training
DOI
10.1145/3422337.3447841
CLC classification number
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
Adversarial examples are among the biggest challenges for machine learning models, especially neural network classifiers. Adversarial examples are inputs manipulated with perturbations that are insignificant to humans yet able to fool machine learning models. Researchers have made great progress in utilizing adversarial training as a defense; however, its overwhelming computational cost limits its applicability, and little has been done to overcome this issue. Single-step adversarial training methods have been proposed as computationally viable solutions, but they still fail to defend against iterative adversarial examples. In this work, we first experimentally analyze several state-of-the-art (SOTA) defenses against adversarial examples. Then, based on observations from these experiments, we propose a novel single-step adversarial training method that can defend against both single-step and iterative adversarial examples. Through extensive evaluations, we demonstrate that our proposed method successfully combines the advantages of single-step (low training overhead) and iterative (high robustness) adversarial training defenses. Compared with ATDA on the CIFAR-10 dataset, for example, our proposed method achieves a 35.67% enhancement in test accuracy and a 19.14% reduction in training time. Compared with methods that use BIM or Madry examples (iterative methods) on CIFAR-10, our proposed method saves up to 76.03% of training time with less than 3.78% degradation in test accuracy. Finally, our experiments on the ImageNet dataset clearly show the scalability of our approach and its performance advantage over SOTA single-step approaches.
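To make the abstract's distinction concrete, the sketch below contrasts a single-step attack (FGSM) with an iterative attack (BIM / Madry-style PGD) and shows how either kind of example is folded into one adversarial-training step. This is a generic PyTorch illustration, not the paper's proposed method; `model`, `x`, `y`, `eps`, `alpha`, and `steps` are assumed placeholders (a classifier, an input batch in [0, 1], labels, the perturbation budget, the per-step size, and the number of attack iterations).

```python
# Minimal sketch (assumptions noted above): single-step vs. iterative
# adversarial example generation, and a training step that uses them.
import torch
import torch.nn.functional as F


def fgsm_example(model, x, y, eps):
    """Single-step attack: one signed-gradient step of size eps (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv + eps * grad.sign()                 # single perturbation step
    return torch.clamp(x_adv, 0.0, 1.0).detach()


def pgd_example(model, x, y, eps, alpha, steps):
    """Iterative attack: repeated small steps, each projected back into the
    eps-ball around x (BIM / Madry-style PGD)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()


def adversarial_training_step(model, optimizer, x, y, attack):
    """One adversarial-training step: generate attacked inputs for the current
    batch, then train on them. `attack` is a callable such as
    lambda m, a, b: fgsm_example(m, a, b, eps=8/255)."""
    x_adv = attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the iterative attack recomputes gradients `steps` times per batch, adversarial training with BIM or PGD examples multiplies the per-batch cost roughly by the number of attack steps; that overhead is what the single-step training methods discussed in the abstract aim to avoid.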
Pages: 17-27
Page count: 11