How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review

Cited by: 0
Authors
Dhamija, Lovi [1 ]
Bansal, Urvashi [1 ]
Affiliations
[1] Dr BR Ambedkar Natl Inst Technol, Jalandhar, Punjab, India
Keywords
Deep learning; Adversarial attacks; Generative adversarial networks; Robustness; Transferability; Generalizability; NEURAL-NETWORKS; ROBUSTNESS; EXAMPLES;
DOI
10.1007/s00354-024-00283-0
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Deep learning plays a significant role in building robust frameworks for complex learning tasks and is consequently deployed in many security-critical contexts, such as self-driving and biometric systems. Owing to their complex structure, Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Adversaries can mount attacks at training or testing time, posing significant security risks in safety-critical applications. It is therefore essential to understand adversarial attacks, how they are crafted, and the strategies available to defend against them. Finding effective defenses against malicious attacks that promote robustness and add security to deep learning models is critical, as is analyzing the challenges that bear on model robustness. This work presents a systematic review of primary studies that focus on providing efficient and robust frameworks against adversarial attacks. Following the standard Systematic Literature Review (SLR) method, studies were gathered from several digital libraries, and a set of research questions was designed and answered in depth. The study classifies defensive strategies, discusses the major conflicting factors that affect robustness and efficiency, and analyzes the impact of adversarial attacks and their perturbation metrics on different defensive approaches. The findings assist researchers and practitioners in choosing an appropriate defensive strategy in light of the research issues and recommendations identified. Finally, drawing on the reviewed studies, the work outlines future directions for designing robust and innovative solutions against adversarial attacks.
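As background for the attack-crafting methods and perturbation metrics the review surveys, below is a minimal sketch (not drawn from the paper itself) of the Fast Gradient Sign Method (FGSM), one of the canonical attacks in this literature (cf. the Madry et al. entry in the reference list), together with a measurement of the resulting perturbation under the L-infinity and L2 norms. The toy model, random input, label, and epsilon value are all illustrative assumptions; any differentiable PyTorch classifier works the same way.

```python
# A minimal FGSM sketch, assuming a PyTorch classifier with inputs in [0, 1].
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Perturb x one signed-gradient step in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Single step of size epsilon along the gradient sign, then clip
    # back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy linear model and a random "image":
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # batch of one 32x32 RGB image (assumed)
y = torch.tensor([3])          # assumed ground-truth class label
x_adv = fgsm_attack(model, x, y)

# Perturbation metrics of the kind the review compares across defenses:
delta = x_adv - x
print("L-inf norm:", delta.abs().max().item())  # bounded above by epsilon
print("L2 norm:  ", delta.norm(p=2).item())
```

The L-infinity bound is what "epsilon" refers to in most of the attack and defense papers cited below; stronger attacks (e.g., projected gradient descent) iterate this step while projecting back into the epsilon-ball.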
Pages: 1165-1235
Page count: 71
References
293 in total
[1]  
Adam GA, 2018, arXiv, DOI arXiv:1808.06645
[2]   Ally patches for spoliation of adversarial patches [J].
Abdel-Hakim, Alaa E.
JOURNAL OF BIG DATA, 2019, 6 (01)
[3]  
Madry A., Towards deep learning models resistant to adversarial attacks, arXiv:1706.06083
[4]   Cognitive data augmentation for adversarial defense via pixel masking [J].
Agarwal, Akshay ;
Vatsa, Mayank ;
Singh, Richa ;
Ratha, Nalini.
PATTERN RECOGNITION LETTERS, 2021, 146 :244-251
[5]  
Agarwal C, 2019, IEEE IMAGE PROC, P3801, DOI [10.1109/ICIP.2019.8803601, 10.1109/icip.2019.8803601]
[6]  
Agrawal R., 2019, arXiv, P1
[7]   Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey [J].
Akhtar, Naveed ;
Mian, Ajmal ;
Kardan, Navid ;
Shah, Mubarak.
IEEE ACCESS, 2021, 9 :155161-155196
[8]   Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [J].
Akhtar, Naveed ;
Mian, Ajmal.
IEEE ACCESS, 2018, 6 :14410-14430
[9]  
Al-Rfou R., 2016, arXiv
[10]   Adversarial Robustness by One Bit Double Quantization for Visual Classification [J].
Aprilpyone, Maungmaung ;
Kinoshita, Yuma ;
Kiya, Hitoshi.
IEEE ACCESS, 2019, 7 :177932-177943