Attack-less adversarial training for a robust adversarial defense

Cited by: 5
Authors
Ho, Jiacang [1]
Lee, Byung-Gook [1]
Kang, Dae-Ki [1]
Affiliations
[1] Dongseo Univ, Dept Comp Engn, 47 Jurye Ro, Busan 47011, South Korea
Keywords
Adversarial machine learning; Adversarial training; Defense technique; Pixel regeneration; Neural networks
DOI
10.1007/s10489-021-02523-y
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Adversarial examples have recently proved effective in fooling deep neural networks. Many researchers have studied this issue by evaluating neural networks against new attack techniques and by increasing the robustness of neural networks with new defense techniques. To the best of our knowledge, adversarial training is one of the most effective defenses against adversarial examples. However, it cannot cope with new attacks because it requires attack techniques during the training phase. In this paper, we propose a novel defense technique, Attack-Less Adversarial Training (ALAT), which is independent of any attack technique and is therefore useful in preventing future attacks. Specifically, ALAT regenerates every pixel of an image into a different pixel value, which typically eliminates the majority of the adversarial noise in an adversarial example. This pixel regeneration is an effective defense because adversarial noise is the core problem that drives neural networks to high misclassification rates. Our experimental results on several benchmark datasets show that our method not only relieves over-fitting when training neural networks for a large number of epochs, but also boosts the robustness of the neural network.
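The abstract does not spell out the exact regeneration rule, so the following Python sketch only illustrates the general idea of pixel regeneration as a defense: every pixel is remapped to a nearby but different value, which tends to wash out low-magnitude adversarial perturbations. The function name regenerate_pixels, the quantize-then-jitter scheme, and the levels parameter are illustrative assumptions, not the authors' ALAT procedure.

import numpy as np

def regenerate_pixels(image, levels=16, rng=None):
    # Hypothetical pixel-regeneration step (an assumption, not the exact
    # ALAT rule from the paper): quantize each pixel of a float image in
    # [0, 1] to a coarse grid, then jitter it within its bin so that no
    # pixel keeps its exact original value. Adversarial noise whose
    # magnitude is below the grid resolution is largely destroyed.
    rng = np.random.default_rng() if rng is None else rng
    step = 1.0 / levels
    quantized = (np.floor(image / step) + 0.5) * step   # snap to bin centres
    jitter = rng.uniform(-step / 4, step / 4, size=image.shape)
    return np.clip(quantized + jitter, 0.0, 1.0)

# During training, the network would be fed regenerated images rather than
# attack-generated adversarial examples, e.g.:
#   x_batch = regenerate_pixels(x_batch)

Because the regeneration step is applied to the inputs themselves, no attack algorithm is needed at training time, which is what makes the approach "attack-less" in the sense described above.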
Pages: 4364-4381 (18 pages)