Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited by: 2
Authors
Li, Huayu [1 ]
Ditzler, Gregory [1 ]
Affiliations
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Funding
U.S. National Science Foundation;
Keywords
continual learning; adversarial machine learning; data poisoning attack;
DOI
10.1109/IJCNN55064.2022.9892774
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks by training on batches of data or data streams over time. In many real-world scenarios, the environments that generate streaming data are exposed to untrusted sources, where an adversary can manipulate and inject malicious samples into the training data. These untrusted sources and malicious samples expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused only on adversary-agnostic scenarios, without considering the possibility of data poisoning attacks. Further, recent work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under a relaxed constraint on data manipulation. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
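To make the clean-label setting described in the abstract concrete, the sketch below shows a generic first-order poisoning loop against a toy continual learner: labels are left correct (the clean-label constraint), features of the new task's data are perturbed within a small L-infinity ball, and the perturbation is chosen so that a training step on the poison tends to increase the loss on the previously learned task. This is a minimal illustrative sketch under simplifying assumptions (logistic regression, a gradient-alignment heuristic), not the attack proposed in the paper; all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_train(w, X, y, lr=0.5, epochs=200):
    """Plain batch gradient descent for logistic regression."""
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

def clean_label_poison(w, X_p, y_p, X_tgt, y_tgt, eps=0.3, steps=30, lr=0.1):
    """Perturb poison features within an L_inf ball of radius eps while the
    labels y_p stay correct (clean-label). Heuristic: minimize the alignment
    between the poison-set training gradient and the target-task gradient, so
    a gradient step on the poison tends to *ascend* the target-task loss."""
    g_tgt = X_tgt.T @ (sigmoid(X_tgt @ w) - y_tgt) / len(y_tgt)
    X_adv = X_p.copy()
    for _ in range(steps):
        p = sigmoid(X_adv @ w)
        s = X_adv @ g_tgt                       # per-sample alignment score
        # gradient of <grad_poison, g_tgt> w.r.t. each poison sample
        grad_X = (p * (1 - p) * s)[:, None] * w[None, :] \
                 + (p - y_p)[:, None] * g_tgt[None, :]
        X_adv -= lr * grad_X                    # anti-align the gradients
        X_adv = np.clip(X_adv, X_p - eps, X_p + eps)  # stay stealthy
    return X_adv

# Toy domain-incremental setup: task A depends on feature 0, task B on feature 1.
X_a = rng.normal(size=(200, 2)); y_a = (X_a[:, 0] > 0).astype(float)
X_b = rng.normal(size=(200, 2)); y_b = (X_b[:, 1] > 0).astype(float)

w0 = sgd_train(np.zeros(2), X_a, y_a)                    # learn task A first
X_b_adv = clean_label_poison(w0, X_b, y_b, X_a, y_a)     # poison task-B data

w_clean = sgd_train(w0.copy(), X_b, y_b, epochs=50)      # honest update on B
w_pois = sgd_train(w0.copy(), X_b_adv, y_b, epochs=50)   # poisoned update on B

acc_clean = accuracy(w_clean, X_a, y_a)  # task-A accuracy after clean B
acc_pois = accuracy(w_pois, X_a, y_a)    # task-A accuracy after poisoned B
```

Comparing `acc_clean` and `acc_pois` measures how much extra forgetting of task A the poisoned update induced; the key invariants are that `y_b` is never altered and every perturbed sample stays within `eps` of its clean original.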
Pages: 8