How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review

Cited: 0
Authors
Dhamija, Lovi [1 ]
Bansal, Urvashi [1 ]
Affiliations
[1] Dr BR Ambedkar Natl Inst Technol, Jalandhar, Punjab, India
Keywords
Deep learning; Adversarial attacks; Generative adversarial networks; Robustness; Transferability; Generalizability; NEURAL-NETWORKS; ROBUSTNESS; EXAMPLES;
DOI
10.1007/s00354-024-00283-0
CLC Classification
TP3 [Computing technology; Computer technology];
Discipline Code
0812 ;
Abstract
Deep learning plays a significant role in building robust and constructive frameworks for tackling complex learning tasks. Consequently, it is widely deployed in security-critical contexts such as self-driving and biometric systems. Owing to their complex structure, Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Adversaries can mount attacks at training or testing time, posing significant security risks in safety-critical applications. It is therefore essential to understand adversarial attacks, the methods used to craft them, and the available defense strategies. Identifying effective defenses that promote robustness and provide additional security in deep learning models is critical, as is analyzing the challenges that affect these models' robustness. This work presents a systematic review of primary studies focused on providing efficient and robust frameworks against adversarial attacks. A standard SLR (Systematic Literature Review) method was used to select studies from several digital libraries; a set of research questions was then designed and answered thoroughly. The study classifies several defensive strategies and discusses the major conflicting factors that influence robustness and efficiency. The impact of adversarial attacks and their perturbation metrics is also analyzed across the different defensive approaches. The findings assist researchers and practitioners in choosing an appropriate defensive strategy in light of the varying research issues and recommendations. Finally, drawing on the reviewed studies, this work identifies future directions for designing robust and innovative solutions against adversarial attacks.
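The abstract refers to methods for crafting adversarial examples. As a purely illustrative sketch (not a method from the reviewed paper), the Fast Gradient Sign Method (FGSM), one of the most widely cited crafting techniques, can be shown on a toy logistic-regression "model" in NumPy; the weights, input, and epsilon below are invented stand-ins:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(w, b, x, y, epsilon):
    """FGSM sketch: step epsilon in the sign of the input gradient of the
    binary cross-entropy loss, then clamp back to the valid pixel range."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w                   # d(loss)/dx for a linear model
    x_adv = x + epsilon * np.sign(grad_x)  # loss-increasing perturbation
    return np.clip(x_adv, 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)        # toy model weights (hypothetical)
b = 0.0
x = rng.uniform(size=16)       # toy 16-"pixel" image in [0, 1]
y = 1.0                        # true label
x_adv = fgsm_attack(w, b, x, y, epsilon=0.1)

# The perturbation is bounded by epsilon in the L-infinity norm, and the
# model's confidence in the true class can only drop for a linear model.
print(float(np.max(np.abs(x_adv - x))), sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

In practice the same one-step gradient-sign idea is applied to a deep network's loss via backpropagation; iterative variants (e.g. PGD) repeat the step with projection, which is why perturbation budgets such as the L-infinity bound appear throughout the defenses the review compares.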
Pages: 1165-1235 (71 pages)
Related Papers (50 total)
  • [41] Towards Resilient and Secure Smart Grids against PMU Adversarial Attacks: A Deep Learning-Based Robust Data Engineering Approach
    Berghout, Tarek
    Benbouzid, Mohamed
    Amirat, Yassine
    ELECTRONICS, 2023, 12 (12)
  • [42] ACADIA: Efficient and Robust Adversarial Attacks Against Deep Reinforcement Learning
    Ali, Haider
    Al Ameedi, Mohannad
    Swami, Ananthram
    Ning, Rui
    Li, Jiang
    Wu, Hongyi
    Cho, Jin-Hee
    2022 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY (CNS), 2022, : 1 - 9
  • [43] Leveraging Deep Learning for Computer Vision: A Review
    Alam, Ekram
    Abu Sufian
    Das, Akhil Kumar
    Bhattacharya, Arijit
    Ali, Md Firoj
    Rahman, M. M. Hafizur
    2021 22ND INTERNATIONAL ARAB CONFERENCE ON INFORMATION TECHNOLOGY (ACIT), 2021, : 298 - 305
  • [44] Robust Deep Learning Models against Semantic-Preserving Adversarial Attack
    Zhao, Yunce
    Gao, Dashan
    Yao, Yinghua
    Zhang, Zeqi
    Mao, Bifei
    Yao, Xin
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [45] DOME-T: Adversarial Computer Vision Attack on Deep Learning Models Based on Tchebichef Image Moments
    Maliamanis, T.
    Papakostas, G. A.
    THIRTEENTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2020), 2021, 11605
  • [46] Ensemble adversarial black-box attacks against deep learning systems
    Hang, Jie
    Han, Keji
    Chen, Hui
    Li, Yun
    PATTERN RECOGNITION, 2020, 101
  • [47] Detection of adversarial attacks against security systems based on deep learning model
    Jaber, Mohanad J.
    Jaber, Zahraa Jasim
    Obaid, Ahmed J.
    JOURNAL OF DISCRETE MATHEMATICAL SCIENCES & CRYPTOGRAPHY, 2024, 27 (05) : 1523 - 1538
  • [48] Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection
    Imran, Muhammad
    Appice, Annalisa
    Malerba, Donato
    FUTURE INTERNET, 2024, 16 (05)
  • [49] Forming Adversarial Example Attacks Against Deep Neural Networks With Reinforcement Learning
    Akers, Matthew
    Barton, Armon
    COMPUTER, 2024, 57 (01) : 88 - 99
  • [50] Evasion Attacks with Adversarial Deep Learning Against Power System State Estimation
    Sayghe, Ali
    Zhao, Junbo
    Konstantinou, Charalambos
    2020 IEEE POWER & ENERGY SOCIETY GENERAL MEETING (PESGM), 2020,