How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review

Cited: 0
Authors
Dhamija, Lovi [1]
Bansal, Urvashi [1]
Affiliations
[1] Dr BR Ambedkar Natl Inst Technol, Jalandhar, Punjab, India
Keywords
Deep learning; Adversarial attacks; Generative adversarial networks; Robustness; Transferability; Generalizability; NEURAL-NETWORKS; ROBUSTNESS; EXAMPLES;
DOI
10.1007/s00354-024-00283-0
CLC Classification
TP3 [Computing technology, computer technology];
Subject Classification Code
0812 ;
Abstract
Deep learning plays a significant role in developing robust and constructive frameworks for tackling complex learning tasks, and it is consequently widely used in security-critical contexts such as self-driving and biometric systems. Due to their complex structure, Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Adversaries can deploy attacks at training or testing time, posing significant security risks in safety-critical applications. It is therefore essential to understand adversarial attacks, the methods used to craft them, and the different strategies for defending against them. Moreover, finding effective defenses against malicious attacks that promote robustness and provide additional security in deep learning models is critical, which calls for an analysis of the challenges concerning the robustness of deep learning models. This work presents a systematic review of primary studies that focus on providing an efficient and robust framework against adversarial attacks. It follows the standard SLR (Systematic Literature Review) method to review studies from different digital libraries, and it designs and thoroughly answers several research questions. The study classifies several defensive strategies and discusses the major conflicting factors that can enhance robustness and efficiency. The impact of adversarial attacks and their perturbation metrics is also analyzed for the different defensive approaches. The findings assist researchers and practitioners in choosing an appropriate defensive strategy by incorporating considerations of the various research issues and recommendations. Finally, based on the reviewed studies, this work identifies future directions for researchers to design robust and innovative solutions against adversarial attacks.
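As background for the attack-crafting methods the abstract refers to, a minimal sketch of one classic technique, the fast gradient sign method (FGSM), on a toy logistic classifier. All weights, inputs, and the epsilon value below are hypothetical, chosen only for illustration; the paper itself covers a much broader range of attacks and defenses:

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """Craft an adversarial example x_adv = x + eps * sign(grad_x loss).

    Uses binary cross-entropy on a toy logistic model p = sigmoid(w.x + b);
    for this loss the input gradient is (p - y) * w.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))          # sigmoid output
    grad = [(p - y) * wi for wi in w]       # d(BCE)/dx
    sign = lambda g: (g > 0) - (g < 0)
    # Move each coordinate by eps in the direction that increases the loss
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy example: a correctly classified point pushed across the boundary.
w, b = [1.0, -2.0], 0.0
x, y = [0.5, 0.1], 1.0                      # logit w.x + b = 0.3 > 0, class 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.4)
logit_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
print(logit_adv)                            # now negative: label flips
```

A bounded perturbation (here at most 0.4 per coordinate) suffices to flip the prediction, which is exactly the kind of fragility the reviewed defense strategies aim to mitigate.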
Pages: 1165-1235 (71 pages)