Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW

Cited: 2
Authors
Villegas-Ch, William [1 ]
Jaramillo-Alcazar, Angel [1 ]
Lujan-Mora, Sergio [2 ]
Affiliations
[1] Univ Las Amer, Escuela Ingn Cibersegur, Fac Ingn Ciencias Aplicadas, Quito 170125, Ecuador
[2] Univ Alicante, Dept Lenguajes & Sistemas Informat, Alicante 03690, Spain
Keywords
adversary examples; robustness of models; countermeasures; NEURAL-NETWORKS;
DOI
10.3390/bdcc8010008
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This study evaluated the generation of adversarial examples and the resulting robustness of an image classification model. Attacks were performed with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Carlini and Wagner (CW) attack to perturb the original images and analyze their impact on the model's classification accuracy. Additionally, image manipulation techniques were investigated as defensive measures against adversarial attacks. The results highlighted the model's vulnerability to adversarial examples: the Fast Gradient Sign Method effectively altered the original classifications, while the Carlini and Wagner method proved less effective. Promising approaches such as noise reduction, image compression, and Gaussian blurring were presented as effective countermeasures. These findings underscore the importance of addressing the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. This article emphasizes the urgency of addressing the threat that adversarial examples pose to machine learning models, highlighting the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. These efforts are crucial to safeguarding model integrity and trust in an environment marked by constantly evolving hostile threats. An average 25% decrease in accuracy was observed for the VGG16 model when exposed to the FGSM and PGD attacks, and an even larger 35% decrease with the Carlini and Wagner method.
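The gradient-based attacks named in the abstract share a common core: perturbing an input in the direction that increases the classifier's loss. As a minimal sketch of FGSM (using a toy linear softmax classifier with a closed-form input gradient, standing in for the paper's VGG16 pipeline; all names and values here are illustrative, not from the study):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: x_adv = clip(x + epsilon * sign(dL/dx)) to the valid pixel range.
    PGD iterates this step with a smaller step size, re-projecting each time."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 8))      # toy weights: 3 classes, 8 "pixels"
x = rng.uniform(size=8)          # "image" with pixels in [0, 1]
y = 1                            # true label

# Softmax cross-entropy on logits w @ x; its gradient w.r.t. x is (p - onehot(y)) @ w.
logits = w @ x
p = np.exp(logits - logits.max())
p /= p.sum()
grad_x = (p - np.eye(3)[y]) @ w

x_adv = fgsm_perturb(x, grad_x, epsilon=0.1)
```

The resulting perturbation is bounded component-wise by epsilon, which is why FGSM is cheap (one gradient step) but coarser than the iterative PGD and optimization-based CW attacks the study also evaluates.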
Pages: 23