Evaluating Impact of Image Transformations on Adversarial Examples

Times Cited: 1
Authors
Tian, Pu [1 ]
Poreddy, Sathvik [2 ]
Danda, Charitha [2 ]
Gowrineni, Chihnita [2 ]
Wu, Yalong [2 ]
Liao, Weixian [3 ]
Affiliations
[1] Stockton Univ, Comp Sci Program, Galloway, NJ 08205 USA
[2] Univ Houston Clear Lake, Dept Comp Sci, Houston, TX 77058 USA
[3] Towson Univ, Dept Comp & Informat Sci, Towson, MD 21252 USA
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Training; Residual neural networks; Computational modeling; Whales; Perturbation methods; Iron; Deep learning; Data models; Robustness; Predictive models; AI security; adversarial attacks; image transformations; deep learning robustness; ATTACKS;
DOI
10.1109/ACCESS.2024.3487479
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep learning has revolutionized image recognition, yet one significant obstacle remains: the vulnerability of these models to adversarial attacks. These attacks manipulate images with subtle perturbations that cause CNNs to misclassify. While defenses such as adversarial training have been proposed, they incur additional training costs in the form of extra input samples or auxiliary models. In this work, we propose an efficient approach to deploying robust models that uses image transformations to remove adversarial noise. We investigate the performance of simple transformations and report several effective ones, including affine transformation, Gaussian blur, median blur, and bilateral blur, against various adversarial attack methods such as the Fast Gradient Sign Method (FGSM), Randomized + FGSM (RFGSM), and Projected Gradient Descent (PGD). We apply these image transformation techniques to the widely used ImageNet dataset, and experimental results demonstrate the potential of image transformations as a strong defense against adversarial attacks in deep learning-based image classification systems, especially when combined with cutting-edge neural network architectures such as ResNet50 and DenseNet121. Our comprehensive results show that these transformations significantly improve the robustness of CNN models against adversarial attacks on ImageNet, achieving recovery rates of up to 85% to 90% without incurring extra resource costs.
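To make the described pipeline concrete, the following is a minimal Python sketch (not the authors' implementation) of the general idea: craft an FGSM adversarial example against a pretrained ResNet50, then apply a Gaussian blur to the image before classification to suppress the perturbation. The epsilon, kernel size, and sigma values are illustrative assumptions, and ImageNet normalization is omitted for brevity.

# Minimal sketch of transformation-based preprocessing defense against FGSM.
# Assumes torch, torchvision, opencv-python, and numpy are installed.
import cv2
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_attack(x, y, eps=0.01):
    # Fast Gradient Sign Method: step along the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def gaussian_blur_defense(x, ksize=5, sigma=1.0):
    # One of the studied transformations: Gaussian blur applied as preprocessing.
    img = np.ascontiguousarray(x.squeeze(0).permute(1, 2, 0).numpy())  # CHW -> HWC
    img = cv2.GaussianBlur(img, (ksize, ksize), sigma)
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

# Usage sketch: x is a 1x3x224x224 tensor scaled to [0, 1], y = torch.tensor([class_id]).
# adv = fgsm_attack(x, y)                                   # craft the adversarial image
# pred = model(gaussian_blur_defense(adv)).argmax(dim=1)    # classify the transformed image

The same wrapper pattern extends to the other transformations named in the abstract (for example cv2.medianBlur or cv2.bilateralFilter in place of cv2.GaussianBlur); since the defense is pure input preprocessing, it adds no training cost to the model.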
Pages: 186217-186228
Number of pages: 12