Unmasking the Vulnerabilities of Deep Learning Models: A Multi-Dimensional Analysis of Adversarial Attacks and Defenses

Cited: 0
Authors
Juraev, Firuz [1 ]
Abuhamad, Mohammed [2 ]
Chan-Tin, Eric [2 ]
Thiruvathukal, George K. [2 ]
Abuhmed, Tamer [1 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Comp Sci & Engn, Suwon, South Korea
[2] Loyola Univ, Dept Comp Sci, Chicago, IL USA
Source
2024 SILICON VALLEY CYBERSECURITY CONFERENCE, SVCC 2024 | 2024
Funding
National Research Foundation, Singapore;
Keywords
Threat Analysis; Deep Learning; Black-box Attacks; Adversarial Perturbations; Defensive Techniques;
DOI
10.1109/SVCC61185.2024.10637364
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and security-critical applications, such as self-driving vehicles, surveillance, drones, and robots. However, adversarial samples, which are imperceptible to the human eye, pose a serious threat that can cause the model to misbehave and compromise the performance of such applications. Addressing the robustness of DL models has become crucial to understanding and defending against adversarial attacks. In this study, we perform comprehensive experiments to examine the effect of adversarial attacks and defenses on various model architectures across well-known datasets. Our research focuses on black-box attacks such as SimBA, HopSkipJump, MGAAttack, and boundary attacks, as well as preprocessor-based defensive mechanisms, including bit squeezing, median smoothing, and JPEG filtering. Experimenting with various models, our results demonstrate that the level of noise needed for a successful attack increases with the number of layers, while the attack success rate decreases, indicating a significant relationship between model complexity and robustness. Investigating the relationship between diversity and robustness, our experiments with diverse models show that a large number of parameters does not imply higher robustness. Our experiments extend to the effects of the training dataset on model robustness: various datasets, including ImageNet-1000, CIFAR-100, and CIFAR-10, are used to evaluate the black-box attacks. Considering the multiple dimensions of our analysis, e.g., model complexity and training dataset, we also examine the behavior of black-box attacks when models apply defenses. Our results show that applying defense strategies can significantly reduce attack effectiveness. This research provides in-depth analysis and insight into the robustness of DL models against various attacks and defenses.
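The abstract names the attack and defense families it evaluates without describing them, so a brief illustration may help. Below is a minimal PyTorch sketch of SimBA (Guo et al., 2019), the simplest of the listed black-box attacks: it queries the model along one random pixel direction at a time and keeps any +/- eps step that lowers the predicted probability of the true label. The function name simba, the step size eps, the query budget, and the assumption that model maps a (1, C, H, W) float tensor in [0, 1] to class logits are illustrative choices, not the paper's configuration.

```python
import torch

@torch.no_grad()
def simba(model, x, y, eps=0.2, max_iters=1000):
    """Untargeted SimBA sketch in the pixel basis (illustrative, not the
    paper's setup): try +/- eps along one random coordinate per step and
    keep any step that lowers p(y | x)."""
    x_adv = x.clone()                                # (C, H, W), values in [0, 1]
    perm = torch.randperm(x_adv.numel())             # random order of pixel directions
    prob = torch.softmax(model(x_adv.unsqueeze(0)), dim=1)[0, y]
    for i in range(min(max_iters, x_adv.numel())):
        q = torch.zeros_like(x_adv).view(-1)         # one-hot direction q
        q[perm[i]] = 1.0
        q = q.view_as(x_adv)
        for sign in (1.0, -1.0):
            cand = (x_adv + sign * eps * q).clamp(0.0, 1.0)
            p = torch.softmax(model(cand.unsqueeze(0)), dim=1)[0, y]
            if p < prob:                             # step lowered p(y|x): keep it
                x_adv, prob = cand, p
                break
    return x_adv
```

Two of the preprocessor defenses the abstract lists, bit squeezing and median smoothing, can likewise be sketched as pure input transforms (a JPEG filter would simply encode and decode the image with an image library before inference). The bit depth and kernel size here are illustrative defaults, not the paper's settings.

```python
def bit_squeeze(x, bits=4):
    """Quantize pixels in [0, 1] to 2**bits levels (bit-depth reduction)."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, k=3):
    """k x k median filter over a (C, H, W) image, built from unfold."""
    pad = k // 2
    xp = torch.nn.functional.pad(x.unsqueeze(0), (pad, pad, pad, pad), mode="reflect")
    patches = xp.unfold(2, k, 1).unfold(3, k, 1)     # (1, C, H, W, k, k)
    flat = patches.contiguous().view(*patches.shape[:4], k * k)
    return flat.median(dim=-1).values.squeeze(0)
```

A defended prediction is then obtained by feeding, e.g., median_smooth(bit_squeeze(x_adv)) to the model; rerunning the attack against that defended pipeline is, in outline, how the abstract's attack-versus-defense measurements would be reproduced.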
Pages: 8
Related Papers
50 records in total
  • [21] Defense Against Adversarial Attacks in Deep Learning
    Li, Yuancheng
    Wang, Yimeng
    APPLIED SCIENCES-BASEL, 2019, 9 (01)
  • [22] A Survey on Deep Learning for Website Fingerprinting Attacks and Defenses
    Liu, Peidong
    He, Longtao
    Li, Zhoujun
    IEEE ACCESS, 2023, 11 : 26033 - 26047
  • [23] Assessing Vulnerabilities of Deep Learning Explainability in Medical Image Analysis Under Adversarial Settings
    de Aguiar, Erikson J.
    Costa, Marcus V. L.
    Traina, Caetano, Jr.
    Traina, Agma J. M.
    2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS, 2023, : 13 - 16
  • [24] Adversarial Attacks on Deep Models for Financial Transaction Records
    Fursov, Ivan
    Morozov, Matvey
    Kaploukhaya, Nina
    Kovtun, Elizaveta
    Rivera-Castro, Rodrigo
    Gusev, Gleb
    Babaev, Dmitry
    Kireev, Ivan
    Zaytsev, Alexey
    Burnaev, Evgeny
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 2868 - 2878
  • [25] On Model Outsourcing Adaptive Attacks to Deep Learning Backdoor Defenses
    Peng, Huaibing
    Qiu, Huming
    Ma, Hua
    Wang, Shuo
    Fu, Anmin
    Al-Sarawi, Said F.
    Abbott, Derek
    Gao, Yansong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 2356 - 2369
  • [26] Deep learning model inversion attacks and defenses: a comprehensive survey
    Yang, Wencheng
    Wang, Song
    Wu, Di
    Cai, Taotao
    Zhu, Yanming
    Wei, Shicheng
    Zhang, Yiying
    Yang, Xu
    Tang, Zhaohui
    Li, Yan
    Artificial Intelligence Review, 58 (8)
  • [27] Understanding adversarial attacks on deep learning based medical image analysis systems
    Ma, Xingjun
    Niu, Yuhao
    Gu, Lin
    Wang, Yisen
    Zhao, Yitian
    Bailey, James
    Lu, Feng
    PATTERN RECOGNITION, 2021, 110
  • [28] Threat of Adversarial Attacks within Deep Learning: Survey
    Ata-Us-samad
    Singh, R.
    Recent Advances in Computer Science and Communications, 2023, 16 (07)
  • [29] Adversarial Learning Games with Deep Learning Models
    Chivukula, Aneesh Sreevallabh
    Liu, Wei
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 2758 - 2767
  • [30] Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons
    Pravin, Chandresh
    Martino, Ivan
    Nicosia, Giuseppe
    Ojha, Varun
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT I, 2021, 12891 : 16 - 28