Defense Against Adversarial Attacks by Reconstructing Images

Cited by: 30
Authors
Zhang, Shudong [1 ]
Gao, Haichang [1 ]
Rao, Qingxun [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Perturbation methods; Image reconstruction; Training; Iterative methods; Computational modeling; Predictive models; Transform coding; CNN; adversarial examples; adversarial attacks; defend; residual block; reconstruction network; perceptual loss
DOI
10.1109/TIP.2021.3092582
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) are vulnerable to adversarial examples generated by adding small, human-imperceptible perturbations to a clean image. In this paper, we propose an image reconstruction network that reconstructs an input adversarial example into a clean output image to defend against such adversarial attacks. Thanks to the powerful learning capability of the residual block structure, our model can learn a precise mapping from adversarial examples to reconstructed examples. The use of a perceptual loss greatly suppresses the error-amplification effect and improves the performance of our reconstruction network. In addition, adding randomization layers to the end of the network further suppresses the effect of residual noise, especially for iterative attacks. Our model has four advantages: 1) it greatly reduces the impact of adversarial perturbations while having little influence on the prediction performance for clean images; 2) during the inference phase, it outperforms most existing model-agnostic defense methods; 3) it has better generalization capability; and 4) it can be flexibly combined with other methods, such as adversarially trained models.
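The two ideas the abstract leans on — a residual (skip-connection) reconstruction mapping, so the network only has to learn the small perturbation rather than the whole image, and a perceptual loss computed on feature maps rather than raw pixels — can be illustrated with a minimal NumPy sketch. This is not the authors' actual architecture: dense matrix products stand in for convolutional layers, and the fixed random map `phi` stands in for a pretrained CNN feature extractor; all names here are hypothetical.

```python
import numpy as np

def residual_block(x, w1, w2):
    # Skip connection: output = x + f(x). The block only needs to learn
    # the (small) correction f(x), not reproduce the full image x.
    h = np.maximum(0.0, x @ w1)  # ReLU nonlinearity; dense layer stands in for a conv
    return x + h @ w2

def perceptual_loss(phi, reconstructed, clean):
    # Compare feature representations phi(.) instead of raw pixels;
    # in the paper this is what suppresses error amplification.
    return float(np.mean((phi(reconstructed) - phi(clean)) ** 2))

rng = np.random.default_rng(0)
d = 8                                            # toy "image" dimension
x_clean = rng.normal(size=(1, d))
x_adv = x_clean + 0.05 * rng.normal(size=(1, d)) # small adversarial-style perturbation
w1 = 0.1 * rng.normal(size=(d, d))               # untrained block weights (illustration only)
w2 = 0.1 * rng.normal(size=(d, d))
W_phi = rng.normal(size=(d, d))                  # fixed "feature extractor" stand-in
phi = lambda z: z @ W_phi

x_rec = residual_block(x_adv, w1, w2)
loss = perceptual_loss(phi, x_rec, x_clean)      # quantity minimized during training
```

In training, the block weights would be optimized to drive this perceptual loss toward zero over pairs of adversarial and clean images; the skip connection keeps the identity mapping as an easy starting point.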
Pages: 6117-6129
Page count: 13
Related Papers
50 items
  • [1] Image Super-Resolution as a Defense Against Adversarial Attacks
    Mustafa, Aamir
    Khan, Salman H.
    Hayat, Munawar
    Shen, Jianbing
    Shao, Ling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 1711 - 1724
  • [2] Efficient Defense Against Adversarial Attacks on Multimodal Emotion AI Models
    Cho, Hsin-Hung
    Zeng, Jiang-Yi
    Tsai, Min-Yan
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2025,
  • [3] Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
    Kim, Yebon
    Jung, Jinhyo
    Kim, Hyunjun
    So, Hwisoo
    Ko, Yohan
    Shrivastava, Aviral
    Lee, Kyoungwoo
    Hwang, Uiwon
    IEEE ACCESS, 2024, 12 : 176485 - 176497
  • [4] Cyclic Defense GAN Against Speech Adversarial Attacks
    Esmaeilpour, Mohammad
    Cardinal, Patrick
    Koerich, Alessandro Lameiras
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1769 - 1773
  • [5] DDSA: A Defense Against Adversarial Attacks Using Deep Denoising Sparse Autoencoder
    Bakhti, Yassine
    Fezza, Sid Ahmed
    Hamidouche, Wassim
    Deforges, Olivier
    IEEE ACCESS, 2019, 7 : 160397 - 160407
  • [6] A Defense Method Against Facial Adversarial Attacks
    Sadu, Chiranjeevi
    Das, Pradip K.
    2021 IEEE REGION 10 CONFERENCE (TENCON 2021), 2021, : 459 - 463
  • [7] Defense Against Adversarial Attacks in Deep Learning
    Li, Yuancheng
    Wang, Yimeng
    APPLIED SCIENCES-BASEL, 2019, 9 (01):
  • [8] Defense against adversarial attacks using DRAGAN
    ArjomandBigdeli, Ali
    Amirmazlaghani, Maryam
    Khalooei, Mohammad
    2020 6TH IRANIAN CONFERENCE ON SIGNAL PROCESSING AND INTELLIGENT SYSTEMS (ICSPIS), 2020,
  • [9] Deep Image Restoration Model: A Defense Method Against Adversarial Attacks
    Ali, Kazim
    Qureshi, Adnan N.
    Bin Arifin, Ahmad Alauddin
    Bhatti, Muhammad Shahid
    Sohail, Abid
    Hassan, Rohail
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 71 (02): 2209 - 2224
  • [10] Adaptive Image Reconstruction for Defense Against Adversarial Attacks
    Yang, Yanan
    Shih, Frank Y.
    Chang, I-Cheng
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (12)