Defense Against Adversarial Attacks by Reconstructing Images

Cited by: 37
Authors
Zhang, Shudong [1 ]
Gao, Haichang [1 ]
Rao, Qingxun [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Perturbation methods; Image reconstruction; Training; Iterative methods; Computational modeling; Predictive models; Transform coding; CNN; adversarial examples; adversarial attacks; defend; residual block; reconstruction network; perceptual loss;
DOI
10.1109/TIP.2021.3092582
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Convolutional neural networks (CNNs) can be deceived by adversarial examples, which are generated by adding small, human-imperceptible perturbations to a clean image. In this paper, we propose an image reconstruction network that reconstructs an input adversarial example into a clean output image to defend against such attacks. Thanks to the powerful learning capability of the residual block structure, our model learns a precise mapping from adversarial examples to reconstructed examples. The use of a perceptual loss greatly suppresses the error-amplification effect and improves the performance of the reconstruction network. In addition, adding randomization layers to the end of the network further suppresses the effect of residual noise, especially for iterative attacks. Our model has four advantages: 1) it greatly reduces the impact of adversarial perturbations while having little influence on prediction performance for clean images; 2) during the inference phase, it performs better than most existing model-agnostic defense methods; 3) it has better generalization capability; and 4) it can be flexibly combined with other defenses, such as adversarially trained models.
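The abstract describes a residual-block reconstruction network trained with a perceptual loss. A minimal PyTorch sketch of that idea follows; the layer widths, block count, and the tiny stand-in feature extractor are illustrative assumptions (the paper would use a pretrained network such as VGG for the perceptual loss), not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-ReLU-Conv body with an identity skip connection."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual mapping

class ReconstructionNet(nn.Module):
    """Maps an adversarial image to a reconstructed clean image."""
    def __init__(self, ch=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))

def perceptual_loss(feat_extractor, recon, clean):
    # Compare deep feature maps instead of raw pixels, which damps
    # the error-amplification effect the abstract mentions.
    return nn.functional.mse_loss(feat_extractor(recon), feat_extractor(clean))

# Stand-in feature extractor (hypothetical; a pretrained CNN would be used).
feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

net = ReconstructionNet()
adv = torch.rand(1, 3, 32, 32)    # adversarial input
clean = torch.rand(1, 3, 32, 32)  # clean target
recon = net(adv)
loss = perceptual_loss(feat, recon, clean)
```

In training, `loss` would be backpropagated through `net` (with the feature extractor frozen) so the network learns the adversarial-to-clean mapping.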
Pages: 6117-6129
Page count: 13