How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses

Cited by: 25
Authors
Costa, Joana C. [1 ]
Roxo, Tiago [2 ]
Proenca, Hugo
Inacio, Pedro Ricardo Morais
Affiliations
[1] Univ Beira Interior, Sins Lab, Inst Telecomunicacoes, P-6201001 Covilha, Portugal
[2] Univ Beira Interior, Dept Comp Sci, P-6201001 Covilha, Portugal
Keywords
Surveys; Transformers; Perturbation methods; Object recognition; Deep learning; Closed box; Vectors; Adversarial attacks; adversarial defenses; datasets; evaluation metrics; review; vision transformers; recognition; vision
DOI
10.1109/ACCESS.2024.3395118
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep Learning is currently used to perform multiple tasks, such as object recognition, face recognition, and natural language processing. However, Deep Neural Networks (DNNs) are vulnerable to perturbations, known as adversarial examples, that alter a network's prediction, raising concerns about the use of DNNs in critical areas such as self-driving vehicles, malware detection, and healthcare. This paper compiles the most recent adversarial attacks in object recognition, grouped by attacker capacity and knowledge, and modern defenses, clustered by protection strategy, providing the background needed to understand adversarial attacks and defenses. Recent advances regarding Vision Transformers are also presented, a topic not previously covered in the literature, showing the similarities and differences between this architecture and Convolutional Neural Networks. Furthermore, the datasets and metrics most used in adversarial settings are summarized, along with datasets that require further evaluation, which is another contribution. This survey compares state-of-the-art results under different attacks for multiple architectures and compiles all adversarial attacks and defenses with available code, comprising significant contributions to the literature. Finally, practical applications are discussed and open issues are identified, serving as a reference for future work.
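To make the notion of an adversarial example concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the classic white-box attacks typically covered by surveys of this kind. It is a minimal illustration, not code from the paper: the ResNet-18 model, the epsilon budget of 8/255, and the random stand-in image are assumptions made here for the example (input normalization is also omitted for brevity).

# Minimal FGSM sketch -- illustrative only; the model, epsilon, and input
# below are assumptions for this example, not taken from the surveyed paper.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, x, label, epsilon=8 / 255):
    """Return x perturbed within an L-infinity ball of radius epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the gradient: the direction that locally
    # increases the loss the most under an L-infinity constraint.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)         # stand-in for a real image in [0, 1]
label = model(x).argmax(dim=1)         # the model's clean prediction
x_adv = fgsm_attack(model, x, label)
print("clean:", label.item(), "adversarial:", model(x_adv).argmax(dim=1).item())

Even at a per-pixel budget this small, which is typically imperceptible to humans, attacks of this kind routinely flip the prediction of an undefended network; that gap is the vulnerability the surveyed attacks exploit and the surveyed defenses attempt to close.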
Pages: 61113-61136
Page count: 24