Unmasking the Vulnerabilities of Deep Learning Models: A Multi-Dimensional Analysis of Adversarial Attacks and Defenses

Cited: 0
Authors
Juraev, Firuz [1 ]
Abuhamad, Mohammed [2 ]
Chan-Tin, Eric [2 ]
Thiruvathukal, George K. [2 ]
Abuhmed, Tamer [1 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Comp Sci & Engn, Suwon, South Korea
[2] Loyola Univ, Dept Comp Sci, Chicago, IL USA
Source
2024 SILICON VALLEY CYBERSECURITY CONFERENCE, SVCC 2024 | 2024
Funding
National Research Foundation of Singapore;
Keywords
Threat Analysis; Deep Learning; Black-box Attacks; Adversarial Perturbations; Defensive Techniques;
DOI
10.1109/SVCC61185.2024.10637364
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and security-critical applications such as self-driving vehicles, surveillance, drones, and robots. However, adversarial samples, which are imperceptible to the human eye, pose a serious threat that can cause a model to misbehave and compromise the performance of such applications. Addressing the robustness of DL models has become crucial to understanding and defending against adversarial attacks. In this study, we perform comprehensive experiments to examine the effect of adversarial attacks and defenses on various model architectures across well-known datasets. Our research focuses on black-box attacks, such as SimBA, HopSkipJump, MGAAttack, and the boundary attack, as well as preprocessor-based defensive mechanisms, including bit squeezing, median smoothing, and the JPEG filter. Experimenting with various models, our results demonstrate that the level of noise needed for a successful attack increases with the number of layers, while the attack success rate decreases as the number of layers grows, indicating a significant relationship between model complexity and robustness. Investigating the relationship between diversity and robustness, our experiments with diverse models show that a large number of parameters does not imply higher robustness. Our experiments also examine the effect of the training dataset on model robustness: various datasets, including ImageNet-1000, CIFAR-100, and CIFAR-10, are used to evaluate the black-box attacks. Considering the multiple dimensions of our analysis, e.g., model complexity and training dataset, we examine the behavior of black-box attacks when models apply defenses. Our results show that applying defense strategies can significantly reduce attack effectiveness. This research provides in-depth analysis of, and insight into, the robustness of DL models against various attacks and the effectiveness of defenses.
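
To make the attack side concrete, the following is a minimal sketch of SimBA's core query loop, one of the black-box attacks named in the abstract. It assumes a PyTorch classifier over inputs in [0, 1]; the function name simba_untargeted and the defaults eps=0.2 and max_iters=1000 are illustrative assumptions, not the paper's settings.

import torch

def simba_untargeted(model, x, y, eps=0.2, max_iters=1000):
    # Minimal SimBA sketch in the pixel basis: each step perturbs one
    # randomly chosen coordinate by +/- eps and keeps whichever sign
    # lowers the model's probability for the true class y (untargeted).
    model.eval()
    x_adv = x.clone()                                  # x: CHW tensor in [0, 1]
    perm = torch.randperm(x_adv.numel())               # random coordinate order
    with torch.no_grad():
        p_true = torch.softmax(model(x_adv.unsqueeze(0)), dim=1)[0, y]
        for i in range(min(max_iters, perm.numel())):
            diff = torch.zeros_like(x_adv).view(-1)
            diff[perm[i]] = eps
            diff = diff.view_as(x_adv)
            for candidate in (x_adv + diff, x_adv - diff):
                candidate = candidate.clamp(0.0, 1.0)  # stay in valid pixel range
                p = torch.softmax(model(candidate.unsqueeze(0)), dim=1)[0, y]
                if p < p_true:                         # query helped: keep it
                    x_adv, p_true = candidate, p
                    break
    return x_adv

Each iteration costs at most two model queries, which illustrates why harder targets demand larger query and noise budgets, consistent with the abstract's observation that deeper models require more noise to attack.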
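
On the defense side, here is a minimal sketch of the three preprocessor-based defenses named in the abstract, assuming uint8 H x W x 3 image arrays; the parameter choices (bits=4, size=3, quality=75) are illustrative assumptions, not the paper's configuration.

import io
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def bit_squeeze(img, bits=4):
    # Bit squeezing: quantize each 8-bit channel down to 2**bits levels.
    levels = 2 ** bits - 1
    squeezed = np.round(img.astype(np.float32) / 255.0 * levels) / levels * 255.0
    return squeezed.astype(np.uint8)

def median_smooth(img, size=3):
    # Median smoothing: size x size median filter applied per channel.
    return median_filter(img, size=(size, size, 1))

def jpeg_filter(img, quality=75):
    # JPEG filter: round-trip through lossy compression, discarding much of
    # the high-frequency content that adversarial perturbations occupy.
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf))

In a preprocessor-based pipeline, one or more of these transforms is applied to every input before it reaches the classifier, so a carefully tuned adversarial perturbation is partially destroyed before inference.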
Pages: 8