How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review

Cited: 0
Authors
Dhamija, Lovi [1 ]
Bansal, Urvashi [1 ]
Affiliations
[1] Dr BR Ambedkar Natl Inst Technol, Jalandhar, Punjab, India
Keywords
Deep learning; Adversarial attacks; Generative adversarial networks; Robustness; Transferability; Generalizability; NEURAL-NETWORKS; ROBUSTNESS; EXAMPLES;
DOI
10.1007/s00354-024-00283-0
CLC Number
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
Deep learning plays a significant role in building robust and effective frameworks for complex learning tasks. Consequently, it is widely used in security-critical contexts such as self-driving and biometric systems. Owing to their complex structure, Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Adversaries can mount attacks at training or testing time and can cause significant security risks in safety-critical applications. It is therefore essential to understand adversarial attacks, the methods used to craft them, and the available defense strategies, and to identify effective defenses that improve robustness and provide additional security for deep learning models. This, in turn, requires analyzing the challenges that affect the robustness of deep learning models. The proposed work presents a systematic review of primary studies that focus on providing efficient and robust frameworks against adversarial attacks. It follows a standard Systematic Literature Review (SLR) method to gather studies from several digital libraries, and then formulates and answers a set of research questions in detail. The study classifies defensive strategies and discusses the major conflicting factors that influence robustness and efficiency. Moreover, the impact of adversarial attacks and their perturbation metrics is analyzed for the different defensive approaches. The findings assist researchers and practitioners in choosing an appropriate defensive strategy by taking the relevant research issues and recommendations into account. Finally, based on the reviewed studies, this work identifies future directions for designing robust and innovative solutions against adversarial attacks.
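To make the attack side of the review concrete, the sketch below crafts an FGSM-style (Fast Gradient Sign Method) adversarial example, one of the test-time evasion attacks such surveys commonly cover. It is a minimal illustration assuming a PyTorch image classifier; the toy model, the epsilon budget of 8/255, and the random input are placeholder assumptions, not details taken from the paper.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Craft an L-infinity-bounded adversarial example: x' = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier and random "image", purely to keep the sketch self-contained and runnable.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    y = torch.tensor([3])
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())  # stays within epsilon

The perturbation here is measured in the L-infinity norm, one example of the perturbation metrics referred to in the abstract.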
Pages: 1165-1235 (71 pages)
Related Papers
50 records in total (10 shown below)
  • [1] Ding, Jia; Xu, Zhiwu. Adversarial Attacks on Deep Learning Models of Computer Vision: A Survey. ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT III, 2020, 12454: 396-408.
  • [2] Guesmi, Amira; Alouani, Ihsen; Baklouti, Mouna; Frikha, Tarek; Abid, Mohamed. SIT: Stochastic Input Transformation to Defend Against Adversarial Attacks on Deep Neural Networks. IEEE DESIGN & TEST, 2022, 39(3): 63-72.
  • [3] Li, Yuancheng; Wang, Yimeng. Defense Against Adversarial Attacks in Deep Learning. APPLIED SCIENCES-BASEL, 2019, 9(1).
  • [4] Akhtar, Naveed; Mian, Ajmal. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE ACCESS, 2018, 6: 14410-14430.
  • [5] Miller, David J.; Xiang, Zhen; Kesidis, George. Adversarial Learning Targeting Deep Neural Network Classification: A Comprehensive Review of Defenses Against Attacks. PROCEEDINGS OF THE IEEE, 2020, 108(3): 402-433.
  • [6] Zhao, Weimin; Alwidian, Sanaa; Mahmoud, Qusay H. Adversarial Training Methods for Deep Learning: A Systematic Review. ALGORITHMS, 2022, 15(8).
  • [7] Badjie, Bakary; Cecilio, Jose; Casimiro, Antonio. Adversarial Attacks and Countermeasures on Image Classification-based Deep Learning Models in Autonomous Driving Systems: A Systematic Review. ACM COMPUTING SURVEYS, 2025, 57(1).
  • [8] Rahman, Mafizur; Roy, Prosenjit; Frizell, Sherri S.; Qian, Lijun. Evaluating Pretrained Deep Learning Models for Image Classification Against Individual and Ensemble Adversarial Attacks. IEEE ACCESS, 2025, 13: 35230-35242.
  • [9] Akhtom, Dua'a Mkhiemir; Singh, Manmeet Mahinderjit; Xinying, Chew. Enhancing trustworthy deep learning for image classification against evasion attacks: a systematic literature review. ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57(7).
  • [10] Pereira, Rafael; Mendes, Carla; Ribeiro, Jose; Ribeiro, Roberto; Miragaia, Rolando; Rodrigues, Nuno; Costa, Nuno; Pereira, Antonio. Systematic Review of Emotion Detection with Computer Vision and Deep Learning. SENSORS, 2024, 24(11).