How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses

Cited by: 9
Authors
Costa, Joana C. [1 ]
Roxo, Tiago [2 ]
Proenca, Hugo
Inacio, Pedro Ricardo Morais
Affiliations
[1] Univ Beira Interior, Sins Lab, Inst Telecomunicacoes, P-6201001 Covilha, Portugal
[2] Univ Beira Interior, Dept Comp Sci, P-6201001 Covilha, Portugal
Keywords
Surveys; Transformers; Perturbation methods; Object recognition; Deep learning; Closed box; Vectors; Adversarial attacks; adversarial defenses; datasets; evaluation metrics; review; vision transformers; RECOGNITION; VISION;
DOI
10.1109/ACCESS.2024.3395118
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Deep Learning is currently used to perform a wide range of tasks, such as object recognition, face recognition, and natural language processing. However, Deep Neural Networks (DNNs) are vulnerable to input perturbations that alter the network's prediction, known as adversarial examples, which raise concerns about the use of DNNs in critical areas such as Self-driving Vehicles, Malware Detection, and Healthcare. This paper compiles the most recent adversarial attacks in Object Recognition, grouped by attacker capacity and knowledge, and modern defenses clustered by protection strategy, providing the background needed to understand the topic of adversarial attacks and defenses. New advances regarding Vision Transformers are also presented, a review that has not previously appeared in the literature, showing the similarities and differences between this architecture and Convolutional Neural Networks. Furthermore, the datasets and metrics most used in adversarial settings are summarized, along with datasets requiring further evaluation, which is another contribution. This survey compares state-of-the-art results under different attacks for multiple architectures and compiles all the adversarial attacks and defenses with available code, comprising significant contributions to the literature. Finally, practical applications are discussed and open issues are identified, serving as a reference for future work.
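The adversarial examples described in the abstract can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic white-box attacks surveyed in this area, applied here to a toy logistic-regression classifier. The weights, input, and epsilon below are invented for illustration and do not come from the paper.

```python
import math

# Toy linear classifier: predicts class 1 when w . x > 0.
# Weights and input are hypothetical values chosen for illustration.
w = [2.0, -3.0]
x = [0.5, 0.1]

def logit(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def predict(x):
    return 1 if logit(x) > 0 else 0

# FGSM: x_adv = x + eps * sign(dL/dx).
# For the logistic loss with true label y, dL/dx = (sigmoid(w . x) - y) * w,
# so for y = 1 the gradient sign here is -sign(w).
def fgsm(x, y, eps):
    s = 1.0 / (1.0 + math.exp(-logit(x)))            # sigmoid(w . x)
    grad = [(s - y) * wi for wi in w]                 # dL/dx
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * si for xi, si in zip(x, sign)]

x_adv = fgsm(x, y=1, eps=0.2)
print(predict(x))      # clean input: class 1 (logit = 0.7)
print(predict(x_adv))  # perturbed input: class 0 (logit = 0.7 - 5*eps = -0.3)
```

A perturbation of at most 0.2 per coordinate is enough to flip the prediction, which is the core vulnerability the survey addresses; real attacks apply the same gradient-sign idea to images, where such small per-pixel changes are imperceptible.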
Pages: 61113-61136
Page count: 24
Related Papers
50 results in total
  • [31] Unmasking the Vulnerabilities of Deep Learning Models: A Multi-Dimensional Analysis of Adversarial Attacks and Defenses
    Juraev, Firuz
    Abuhamad, Mohammed
    Chan-Tin, Eric
    Thiruvathukal, George K.
    Abuhmed, Tamer
    2024 SILICON VALLEY CYBERSECURITY CONFERENCE, SVCC 2024, 2024,
  • [32] Robustness and Security in Deep Learning: Adversarial Attacks and Countermeasures
    Kaur, Navjot
    Singh, Someet
    Deore, Shailesh Shivaji
    Vidhate, Deepak A.
    Haridas, Divya
    Kosuri, Gopala Varma
    Kolhe, Mohini Ravindra
    JOURNAL OF ELECTRICAL SYSTEMS, 2024, 20 (03) : 1250 - 1257
  • [33] Physical Adversarial Attacks for Surveillance: A Survey
    Nguyen, Kien
    Fernando, Tharindu
    Fookes, Clinton
    Sridharan, Sridha
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 17036 - 17056
  • [34] Adversarial attacks and defenses on text-to-image diffusion models: A survey
    Zhang, Chenyu
    Hu, Mingwang
    Li, Wenhui
    Wang, Lanjun
    INFORMATION FUSION, 2025, 114
  • [35] Hardening Interpretable Deep Learning Systems: Investigating Adversarial Threats and Defenses
    Abdukhamidov, Eldor
    Abuhamad, Mohammed
    Woo, Simon S.
    Chan-Tin, Eric
    Abuhmed, Tamer
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 3963 - 3976
  • [36] A Survey of Adversarial Defenses and Robustness in NLP
    Goyal, Shreya
    Doddapaneni, Sumanth
    Khapra, Mitesh M.
    Ravindran, Balaraman
    ACM COMPUTING SURVEYS, 2023, 55 (14S)
  • [37] Evaluating the Effectiveness of Attacks and Defenses on Machine Learning Through Adversarial Samples
    Gala, Viraj R.
    Schneider, Martin A.
    2023 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS, ICSTW, 2023: 90 - 97
  • [38] Ensemble Adversarial Defenses and Attacks in Speaker Verification Systems
    Chen, Zesheng
    Li, Jack
    Chen, Chao
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (20) : 32645 - 32655
  • [39] The Impact of Adversarial Attacks on Federated Learning: A Survey
    Kumar, Kummari Naveen
    Mohan, Chalavadi Krishna
    Cenkeramaddi, Linga Reddy
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 2672 - 2691
  • [40] Adversarial examples: attacks and defences on medical deep learning systems
    Puttagunta, Murali Krishna
    Ravi, S.
    Babu, C. Nelson Kennedy
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (22) : 33773 - 33809