Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions

Cited: 5
Authors
Mengara, Orson [1 ]
Avila, Anderson [1 ,2 ]
Falk, Tiago H. [1 ,2 ]
Affiliations
[1] Univ Quebec, INRS EMT, Montreal, PQ H5A 1K6, Canada
[2] INRS UQO Joint Res Unit Cybersecur, Gatineau, PQ, Canada
Keywords
Artificial neural networks; Data models; Training; Surveys; Trojan horses; Training data; Deep learning; Detection algorithms; Computer security; Backdoor attack; vulnerability detection; trojan attack; neural trojan; ADVERSARIAL ATTACKS; DETECTING BACKDOOR; POISONING ATTACKS; TROJAN ATTACKS; DEFENSE; CLASSIFICATION; SECURITY;
DOI
10.1109/ACCESS.2024.3355816
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Deep neural network (DNN) classifiers are powerful tools used in many security-sensitive applications. Nonetheless, they are vulnerable to attacks that impede or distort their learning process. For example, in a backdoor attack the attacker poisons the DNN's training set with a few samples from one or more source classes, embeds a backdoor pattern in them, and relabels them as a target class. Because the trained DNN still behaves normally on clean samples, the attack is difficult to spot, yet any input containing the backdoor pattern triggers malicious behavior of the attacker's choosing. In this study, we survey the literature and highlight the latest advances in backdoor attack strategies and defense mechanisms. We conclude with a discussion of challenges and open issues, as well as future research opportunities.
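The poisoning mechanism described in the abstract can be sketched in a few lines. This is an illustrative BadNets-style example, not code from the surveyed works; the `poison_dataset` helper, the trigger shape, and the toy data are all assumptions made for demonstration.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.05, seed=0):
    """Illustrative backdoor poisoning: stamp a small trigger patch onto a
    fraction of training images and relabel them as the attacker's target
    class. A model trained on the result tends to associate the trigger
    with the target class while behaving normally on clean inputs."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white-square trigger, bottom-right corner
    labels[idx] = target_class    # mislabel only the poisoned samples
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" with labels 0..9.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7)
```

After poisoning, only the selected 5% of samples carry the trigger and the target label; the remaining 95% are untouched, which is what makes such attacks hard to detect by inspecting aggregate training-set statistics.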
Pages: 29004-29023
Page count: 20
References
314 total
[1]  
Abaid Z, 2017, 2017 IEEE 16TH INTERNATIONAL SYMPOSIUM ON NETWORK COMPUTING AND APPLICATIONS (NCA), P375
[2]  
Adam-Bourdarios C., 2014, P 2014 INT C HIGH EN, P19
[3]   VENOMAVE: Targeted Poisoning Against Speech Recognition [J].
Aghakhani, Hojjat ;
Schoenherr, Lea ;
Eisenhofer, Thorsten ;
Kolossa, Dorothea ;
Holz, Thorsten ;
Kruegel, Christopher ;
Vigna, Giovanni .
2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, :404-417
[4]   Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [J].
Akhtar, Naveed ;
Mian, Ajmal .
IEEE ACCESS, 2018, 6 :14410-14430
[5]  
Alam M., 2019, P 56 ACM IEEE DES AU, P1
[6]   Triggerability of Backdoor Attacks in Multi-Source Transfer Learning-based Intrusion Detection [J].
Alhussien, Nour ;
Aleroud, Ahmed ;
Rahaeimehr, Reza ;
Schwarzmann, Alexander .
2022 IEEE/ACM INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING, APPLICATIONS AND TECHNOLOGIES, BDCAT, 2022, :40-47
[7]  
Ali H., 2023, IEEE Access, V11
[8]   Cyber Attack Detection for Self-Driving Vehicle Networks Using Deep Autoencoder Algorithms [J].
Alsaade, Fawaz Waselallah ;
Al-Adhaileh, Mosleh Hmoud .
SENSORS, 2023, 23 (08)
[9]  
[Anonymous], 2008, P 25 INT C MACH LEAR
[10]  
Ashcraft C., 2021, arXiv, DOI 10.48550/arXiv.2106.07798