Generation and Countermeasures of adversarial examples on vision: a survey

Cited by: 1
Authors
Liu, Jiangfan [1 ,2 ]
Li, Yishan [1 ]
Guo, Yanming [1 ]
Liu, Yu [3 ]
Tang, Jun [1 ]
Nie, Ying [4 ]
Affiliations
[1] Natl Univ Def Technol, Changsha, Peoples R China
[2] Beihang Univ, Beijing, Peoples R China
[3] Dalian Univ Technol, Dalian, Peoples R China
[4] North China Inst Comp Technol, Beijing, Peoples R China
Keywords
Deep learning; Computer vision; Adversarial examples; Adversarial attacks; Adversarial defenses; Deep neural networks; Robustness
DOI
10.1007/s10462-024-10841-z
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent studies have found that deep learning models are vulnerable to adversarial examples: applying a carefully crafted, imperceptible perturbation to a clean example can effectively deceive a well-trained, high-accuracy deep learning model. Moreover, adversarial examples can be assigned the attacked label with high confidence, while humans can barely discern the difference between clean and adversarial examples. This has raised serious concern about the robustness and trustworthiness of deep learning techniques. In this survey, we review the existence, generation, and countermeasures of adversarial examples in computer vision, aiming to provide comprehensive coverage of the field with an intuitive understanding of the underlying mechanisms, and we summarize the strengths, weaknesses, and major challenges. We hope this effort will ignite further interest in the community in solving current challenges and exploring this fundamental area.
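As an illustration of the perturbation mechanism the abstract describes, the sketch below applies a one-step, FGSM-style attack (x_adv = x + ε·sign(∇ₓ loss)) to a toy logistic-regression "model" in NumPy. The weights, input, and ε are illustrative values chosen for this example, not anything from the survey; it only shows that a small, bounded perturbation can flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM-style perturbation: x + eps * sign(grad_x loss)."""
    p = sigmoid(w @ x + b)   # model's confidence for class 1
    grad = (p - y) * w       # gradient of cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad)

# Toy linear model and a clean input (illustrative values)
w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([0.5, 0.3]), 1.0

x_adv = fgsm(x, y, w, b, eps=0.15)
pred_clean = int(sigmoid(w @ x + b) > 0.5)      # clean input: predicted class 1
pred_adv = int(sigmoid(w @ x_adv + b) > 0.5)    # perturbed input: prediction flips to 0
```

Each input coordinate moves by at most ε = 0.15, yet the decision changes; deep networks exhibit the same behavior at perturbation levels imperceptible to humans.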
Pages: 48
References (254 in total)
[1]   Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [J].
Addepalli, Sravanti ;
Vivek, B. S. ;
Baburaj, Arya ;
Sriramanan, Gaurang ;
Babu, R. Venkatesh .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :1017-1026
[2]   Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [J].
Akhtar, Naveed ;
Mian, Ajmal .
IEEE ACCESS, 2018, 6 :14410-14430
[3]   Revisiting model's uncertainty and confidences for adversarial example detection [J].
Aldahdooh, Ahmed ;
Hamidouche, Wassim ;
Deforges, Olivier .
APPLIED INTELLIGENCE, 2023, 53 (01) :509-531
[4]   Fast adversarial attacks to deep neural networks through gradual sparsification [J].
Amini, Sajjad ;
Heshmati, Alireza ;
Ghaemmaghami, Shahrokh .
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127
[5]   Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations [J].
Amini, Sajjad ;
Ghaemmaghami, Shahrokh .
IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) :1889-1903
[6]  
Anil C, 2019, PR MACH LEARN RES, V97
[7]  
Anish A, 2018, PMLR
[8]  
Arjovsky M, 2017, PR MACH LEARN RES, V70
[9]  
Bai T., 2021, INT JOINT C ART INT
[10]  
Baluja S, 2017, arXiv:1703.09387