How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review

Cited: 0
Authors
Dhamija, Lovi [1 ]
Bansal, Urvashi [1 ]
Affiliations
[1] Dr BR Ambedkar Natl Inst Technol, Jalandhar, Punjab, India
Keywords
Deep learning; Adversarial attacks; Generative adversarial networks; Robustness; Transferability; Generalizability; NEURAL-NETWORKS; ROBUSTNESS; EXAMPLES;
DOI
10.1007/s00354-024-00283-0
Chinese Library Classification (CLC): TP3 [Computing technology, computer technology]
Subject classification code: 0812
Abstract
Deep learning plays a significant role in building robust and effective frameworks for complex learning tasks, and it is consequently used in many security-critical contexts such as self-driving and biometric systems. Owing to their complex structure, Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Adversaries can mount attacks at training or testing time, posing serious security risks in safety-critical applications. It is therefore essential to understand adversarial attacks, the methods used to craft them, and the strategies available to defend against them. Finding effective defenses that promote robustness and provide additional security in deep learning models is equally critical, which in turn requires analyzing the challenges that affect model robustness. This work presents a systematic review of primary studies that focus on providing efficient and robust frameworks against adversarial attacks. Following a standard SLR (Systematic Literature Review) method, studies were collected from different digital libraries, and several research questions were formulated and answered in detail. The study classifies the main defensive strategies and discusses the conflicting factors that influence robustness and efficiency. Moreover, the impact of adversarial attacks and their perturbation metrics is analyzed across the different defensive approaches. The findings assist researchers and practitioners in choosing an appropriate defensive strategy in light of the identified research issues and recommendations. Finally, drawing on the reviewed studies, this work outlines future directions for designing robust and innovative solutions against adversarial attacks.
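As an illustration of the attack-crafting methods and defensive strategies surveyed in the abstract, the sketch below shows the Fast Gradient Sign Method (FGSM) together with a basic adversarial-training step. It is a minimal example, not taken from the reviewed studies: it assumes a PyTorch image classifier whose inputs are scaled to [0, 1], and the function names and the epsilon value (an L-infinity perturbation budget) are illustrative choices.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Craft adversarial examples with FGSM: shift each pixel by +/- epsilon
    # in the direction that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the result a valid image (inputs assumed scaled to [0, 1]).
    return torch.clamp(x_adv, 0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # One adversarial-training step: optimize on a mix of clean and
    # FGSM-perturbed inputs, a common defensive strategy.
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

In a typical robustness evaluation, the model's accuracy on x_adv is compared against its accuracy on the clean batch x for a fixed epsilon, which is one common way a perturbation budget enters robustness comparisons.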
Pages: 1165-1235
Page count: 71
Related Papers (50 in total)
  • [21] Deep Learning Defense Method Against Adversarial Attacks
    Wang, Ling
    Zhang, Cheng
    Liu, Jie
2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020: 3667 - 3671
  • [22] Transferability of Adversarial Attacks on Tiny Deep Learning Models for IoT Unmanned Aerial Vehicles
    Zhou, Shan
    Huang, Xianting
    Obaidat, Mohammad S.
    Alzahrani, Bander A.
    Han, Xuming
    Kumari, Saru
    Chen, Chien-Ming
IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (12): 21037 - 21045
  • [23] How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses
    Costa, Joana C.
    Roxo, Tiago
    Proenca, Hugo
    Inacio, Pedro Ricardo Morais
    IEEE ACCESS, 2024, 12 : 61113 - 61136
  • [24] Adversarial attacks on deep learning models in smart grids
    Hao, Jingbo
    Tao, Yang
    ENERGY REPORTS, 2022, 8 : 123 - 129
  • [25] Addressing Adversarial Attacks in IoT Using Deep Learning AI Models
    Bommana, Sesibhushana Rao
    Veeramachaneni, Sreehari
    Ahmed, Syed Ershad
    Srinivas, M. B.
    IEEE ACCESS, 2025, 13 : 50437 - 50449
  • [27] Defending AI Models Against Adversarial Attacks in Smart Grids Using Deep Learning
    Sampedro, Gabriel Avelino
    Ojo, Stephen
    Krichen, Moez
    Alamro, Meznah A.
    Mihoub, Alaeddine
    Karovic, Vincent
    IEEE ACCESS, 2024, 12 : 157408 - 157417
  • [28] Beyond accuracy and precision: a robust deep learning framework to enhance the resilience of face mask detection models against adversarial attacks
    Sheikh, Burhan Ul Haque
    Zafar, Aasim
    EVOLVING SYSTEMS, 2024, 15 (01) : 1 - 24
  • [29] Deep learning adversarial attacks and defenses in autonomous vehicles: a systematic literature review from a safety perspective
    Ibrahum, Ahmed Dawod Mohammed
    Hussain, Manzoor
    Hong, Jang-Eui
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 58 (01)
  • [30] Secure Collaborative Deep Learning Against GAN Attacks in the Internet of Things
    Chen, Zhenzhu
    Fu, Anmin
    Zhang, Yinghui
    Liu, Zhe
    Zeng, Fanjian
    Deng, Robert H.
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (07) : 5839 - 5849