Defense Against Adversarial Attacks by Reconstructing Images

Cited by: 30
|
Authors
Zhang, Shudong [1 ]
Gao, Haichang [1 ]
Rao, Qingxun [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Perturbation methods; Image reconstruction; Training; Iterative methods; Computational modeling; Predictive models; Transform coding; CNN; adversarial examples; adversarial attacks; defend; residual block; reconstruction network; perceptual loss;
DOI
10.1109/TIP.2021.3092582
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) are vulnerable to being deceived by adversarial examples, which are generated by adding small, human-imperceptible perturbations to a clean image. In this paper, we propose an image reconstruction network that reconstructs an input adversarial example into a clean output image to defend against such adversarial attacks. Owing to the powerful learning capability of the residual block structure, our model can learn a precise mapping from adversarial examples to reconstructed examples. The use of a perceptual loss greatly suppresses the error amplification effect and improves the performance of our reconstruction network. In addition, adding randomization layers to the end of the network further suppresses the effect of residual noise, especially for iterative attacks. Our model has the following four advantages. 1) It greatly reduces the impact of adversarial perturbations while having little influence on the prediction performance for clean images. 2) During the inference phase, it outperforms most existing model-agnostic defense methods. 3) It has better generalization capability. 4) It can be flexibly combined with other methods, such as adversarially trained models.
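As a rough illustration of the pipeline the abstract describes (a residual-block reconstruction network followed by a randomization layer), here is a minimal NumPy sketch. The single-channel 3x3 convolution, the toy weights, and the circular-shift randomization are illustrative assumptions, not the authors' actual architecture; a trained model would fit the residual blocks by minimizing a perceptual loss between the reconstructed and clean images.

```python
import numpy as np

def conv3x3(x, w):
    """Toy single-channel 3x3 convolution with zero padding."""
    h, wd = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    """y = x + F(x): the skip connection means the block only has to
    learn the small correction that removes the adversarial perturbation."""
    hidden = np.maximum(conv3x3(x, w1), 0.0)  # ReLU
    return x + conv3x3(hidden, w2)

def randomization_layer(x, rng):
    """Random circular shift of the reconstructed image; a stand-in for
    the paper's randomization layers, which break the exact pixel
    alignment that iterative attacks rely on."""
    dy, dx = rng.integers(0, 4, size=2)
    return np.roll(np.roll(x, dy, axis=0), dx, axis=1)

rng = np.random.default_rng(0)
clean = rng.random((8, 8))
adversarial = clean + 0.05 * rng.standard_normal((8, 8))  # toy perturbation

# Untrained toy weights, for shape/flow illustration only.
w1 = 0.01 * rng.standard_normal((3, 3))
w2 = 0.01 * rng.standard_normal((3, 3))

reconstructed = randomization_layer(residual_block(adversarial, w1, w2), rng)
print(reconstructed.shape)  # (8, 8)
```

Because the defense is a separate preprocessing network, it is model-agnostic: the reconstructed image is simply fed to any downstream classifier, which is why the abstract notes it can be combined with adversarially trained models.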
Pages: 6117-6129
Number of pages: 13
Related Papers
50 records
  • [21] Defense-VAE: A Fast and Accurate Defense Against Adversarial Attacks
    Li, Xiang
    Ji, Shihao
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 191 - 207
  • [22] Detection defense against adversarial attacks with saliency map
    Ye, Dengpan
    Chen, Chuanxi
    Liu, Changrui
    Wang, Hao
    Jiang, Shunzhi
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 10193 - 10210
  • [23] Symmetry Defense Against CNN Adversarial Perturbation Attacks
    Lindqvist, Blerta
    INFORMATION SECURITY, ISC 2023, 2023, 14411 : 142 - 160
  • [24] Universal Inverse Perturbation Defense Against Adversarial Attacks
    Chen J.-Y.
    Wu C.-A.
    Zheng H.-B.
    Wang W.
    Wen H.
    Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (10): : 2172 - 2187
  • [25] DEFENSE AGAINST ADVERSARIAL ATTACKS ON SPOOFING COUNTERMEASURES OF ASV
    Wu, Haibin
    Liu, Songxiang
    Meng, Helen
    Lee, Hung-yi
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6564 - 6568
  • [26] Defense Against Adversarial Attacks Based on Stochastic Descent Sign Activation Networks on Medical Images
    Yang, Yanan
    Shih, Frank Y.
    Roshan, Usman
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (03)
  • [27] Defense against adversarial attacks in traffic sign images identification based on 5G
    Wu, Fei
    Xiao, Limin
    Yang, Wenxue
    Zhu, Jinbin
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2020, 2020 (01)
  • [29] AGS: Attribution Guided Sharpening as a Defense Against Adversarial Attacks
    Tobia, Javier Perez
    Braun, Phillip
    Narayan, Apurva
    ADVANCES IN INTELLIGENT DATA ANALYSIS XX, IDA 2022, 2022, 13205 : 225 - 236
  • [30] Defense-PointNet: Protecting PointNet Against Adversarial Attacks
    Zhang, Yu
    Liang, Gongbo
    Salem, Tawfiq
    Jacobs, Nathan
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 5654 - 5660