How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review

Times cited: 0
Authors
Dhamija, Lovi [1 ]
Bansal, Urvashi [1 ]
Affiliations
[1] Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, India
Keywords
Deep learning; Adversarial attacks; Generative adversarial networks; Robustness; Transferability; Generalizability; NEURAL-NETWORKS; ROBUSTNESS; EXAMPLES
DOI
10.1007/s00354-024-00283-0
CLC number
TP3 [Computing technology and computer technology]
Discipline code
0812
Abstract
Deep learning plays a significant role in developing robust and constructive frameworks for tackling complex learning tasks. Consequently, it is widely used in security-critical contexts such as self-driving and biometric systems. Owing to their complex structure, Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Adversaries can mount attacks at training or testing time and thereby pose significant security risks in safety-critical applications. It is therefore essential to understand adversarial attacks, the methods used to craft them, and the different defensive strategies, and to identify effective defenses that promote robustness and provide additional security for deep learning models. This calls for an analysis of the various challenges concerning the robustness of deep learning models. The proposed work presents a systematic review of primary studies that focus on providing an efficient and robust framework against adversarial attacks. The review follows a standard SLR (Systematic Literature Review) method to select studies from different digital libraries, and then designs and thoroughly answers several research questions. The study classifies several defensive strategies and discusses the major conflicting factors that affect robustness and efficiency. Moreover, the impact of adversarial attacks and their perturbation metrics is analyzed for different defensive approaches. The findings assist researchers and practitioners in choosing an appropriate defensive strategy by taking the varying research issues and recommendations into account. Finally, drawing on the reviewed studies, this work identifies future directions for designing robust and innovative solutions against adversarial attacks.
Pages: 1165-1235
Page count: 71
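As a brief illustration of the adversarial crafting methods surveyed in the abstract, the sketch below implements the Fast Gradient Sign Method (FGSM), a widely cited test-time attack that adds an L-infinity-bounded perturbation in the direction of the loss gradient. This is a minimal example, not code from the reviewed paper; the use of PyTorch, the epsilon value, and the assumption that inputs are images scaled to [0, 1] are all illustrative choices.

```python
# Minimal FGSM sketch (assumed PyTorch setup; epsilon and pixel range are
# illustrative assumptions, not taken from the reviewed paper).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Craft an L-infinity bounded adversarial example from a clean batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

A defense such as adversarial training would reuse this routine inside the training loop, replacing clean batches with `fgsm_attack(model, x, y)` outputs before computing the training loss; the reviewed studies compare this and other defensive strategies in detail.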