Defense Against Adversarial Attacks by Reconstructing Images

Cited by: 30
Authors
Zhang, Shudong [1 ]
Gao, Haichang [1 ]
Rao, Qingxun [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Perturbation methods; Image reconstruction; Training; Iterative methods; Computational modeling; Predictive models; Transform coding; CNN; adversarial examples; adversarial attacks; defend; residual block; reconstruction network; perceptual loss;
DOI
10.1109/TIP.2021.3092582
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) are vulnerable to adversarial examples, which are generated by adding small, human-imperceptible perturbations to a clean image. In this paper, we propose an image reconstruction network that reconstructs an input adversarial example into a clean output image to defend against such adversarial attacks. Owing to the powerful learning capability of the residual block structure, our model can learn a precise mapping from adversarial examples to reconstructed examples. The use of a perceptual loss greatly suppresses the error-amplification effect and improves the performance of the reconstruction network. In addition, adding randomization layers to the end of the network further suppresses the effects of additional noise, especially against iterative attacks. Our model has four advantages: 1) it greatly reduces the impact of adversarial perturbations while having little influence on the prediction performance for clean images; 2) during the inference phase, it outperforms most existing model-agnostic defense methods; 3) it has better generalization capability; and 4) it can be flexibly combined with other defenses, such as adversarially trained models.
Pages: 6117-6129 (13 pages)
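The abstract names three mechanisms: a residual mapping from adversarial to clean images, a loss computed in a feature space rather than pixel space, and randomization layers appended to the network's end. The NumPy sketch below illustrates each idea in miniature; `residual_block`, `perceptual_loss`, `randomization_layer`, and the random feature extractor are all illustrative stand-ins, not the authors' architecture.

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = x + f(x): the identity shortcut means the network only has to
    learn the small correction that removes the adversarial perturbation."""
    h = np.maximum(0.0, x @ w1)          # linear + ReLU stands in for conv + ReLU
    return x + h @ w2

def perceptual_loss(recon, clean, feat):
    """Compare images in a feature space instead of pixel space; `feat`
    stands in for a fixed, pretrained feature extractor."""
    return float(np.mean((feat(recon) - feat(clean)) ** 2))

def randomization_layer(x, rng):
    """Stand-in for the random layers at the end of the network: a small
    random rescaling disturbs the gradients that iterative attacks rely on."""
    return x * rng.uniform(0.95, 1.05)

rng = np.random.default_rng(0)
d = 8                                     # toy feature dimension
w1 = 0.1 * rng.standard_normal((d, d))
w2 = 0.1 * rng.standard_normal((d, d))
feat_w = rng.standard_normal((d, d))
feat = lambda z: np.maximum(0.0, z @ feat_w)

clean = rng.standard_normal((1, d))
adv = clean + 0.05 * rng.standard_normal((1, d))   # "adversarial" input
recon = randomization_layer(residual_block(adv, w1, w2), rng)
loss = perceptual_loss(recon, clean, feat)
```

In the paper's setting the inputs are images, the blocks are convolutional, and `feat` would be a pretrained CNN; the sketch only shows why the shortcut connection and feature-space loss fit the defense: the network corrects a small perturbation rather than regenerating the whole image, and the loss penalizes semantically visible error rather than raw pixel differences.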