Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited by: 2
Authors
Li, Huayu [1 ]
Ditzler, Gregory [1 ]
Affiliation
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Funding
US National Science Foundation;
Keywords
continual learning; adversarial machine learning; data poisoning attack;
DOI
10.1109/IJCNN55064.2022.9892774
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks by training on batches or streams of data over time. In many real-world scenarios, the environments that generate streaming data are exposed to untrusted sources, which an adversary can exploit to manipulate and inject malicious samples into the training data. Such untrusted sources and malicious samples expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused only on adversary-agnostic scenarios, without considering the possibility of data poisoning attacks. Further, prior work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under a relaxed constraint on data manipulation. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
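To make the abstract's setting concrete, the following is a minimal NumPy sketch of the *idea* of clean-label poisoning against a sequential (continual) learner, not the paper's actual algorithm: a logistic model learns task A, then a fine-tuning step on task B is poisoned by bounded input perturbations (labels left untouched) crafted with a first-order heuristic so the task-B training gradient climbs the task-A loss. All names (`eps`, the two toy tasks, the crafting rule) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean logistic loss with respect to the weights.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def train(w, X, y, lr=0.5, epochs=300):
    # Plain full-batch gradient descent.
    for _ in range(epochs):
        w = w - lr * grad(w, X, y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

# Two toy tasks: task A is separable on feature 0, task B on feature 1.
XA = rng.normal(size=(200, 2)); yA = (XA[:, 0] > 0).astype(float)
XB = rng.normal(size=(200, 2)); yB = (XB[:, 1] > 0).astype(float)

# Learn task A first -- the task the attacker wants the model to forget.
w = train(np.zeros(2), XA, yA)

# Clean-label crafting (illustrative heuristic): nudge each task-B input,
# within an l_inf budget eps, so its per-sample training gradient pushes
# the weights *up* the task-A loss surface. Labels stay correct.
eps = 0.3
g_A = grad(w, XA, yA)                             # ascent direction for task-A loss
r = sigmoid(XB @ w) - yB                          # per-sample residuals on task B
XB_poison = XB - eps * np.sign(np.outer(r, g_A))  # first-order perturbation

# Continual step: fine-tune the task-A model on clean vs. poisoned task B.
w_clean = train(w.copy(), XB, yB)
w_poison = train(w.copy(), XB_poison, yB)

print("task-A acc after clean task B:   ", accuracy(w_clean, XA, yA))
print("task-A acc after poisoned task B:", accuracy(w_poison, XA, yA))
```

The stealth constraint from the abstract is captured by the l_inf budget and the unchanged labels; the paper's attack targets deep networks and specific tasks, which this two-dimensional sketch only gestures at.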
Pages: 8
Related Papers
50 records total
  • [21] Efficient continual learning in neural networks with embedding regularization
    Pomponi, Jary
    Scardapane, Simone
    Lomonaco, Vincenzo
    Uncini, Aurelio
    NEUROCOMPUTING, 2020, 397 : 139 - 148
  • [22] Streaming Graph Neural Networks via Continual Learning
    Wang, Junshan
    Song, Guojie
    Wu, Yi
    Wang, Liang
    CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, : 1515 - 1524
  • [23] CONTRA: Defending Against Poisoning Attacks in Federated Learning
    Awan, Sana
    Luo, Bo
    Li, Fengjun
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 455 - 475
  • [24] Continual learning for recurrent neural networks: An empirical evaluation
    Cossu, Andrea
    Carta, Antonio
    Lomonaco, Vincenzo
    Bacciu, Davide
    NEURAL NETWORKS, 2021, 143 : 607 - 627
  • [25] Defending Quantum Neural Networks against Adversarial Attacks with Homomorphic Data Encryption
    Wang, Ellen
    Chain, Helena
    Wang, Xiaodi
    Ray, Avi
    Wooldridge, Tyler
    2023 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE, CSCI 2023, 2023, : 816 - 822
  • [26] Federated Learning Under Attack: Exposing Vulnerabilities Through Data Poisoning Attacks in Computer Networks
    Nowroozi, Ehsan
    Haider, Imran
    Taheri, Rahim
    Conti, Mauro
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2025, 22 (01): 822 - 831
  • [27] Model poisoning attacks against distributed machine learning systems
    Tomsett, Richard
    Chan, Kevin
    Chakraborty, Supriyo
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [28] Robustness Evaluations of Sustainable Machine Learning Models against Data Poisoning Attacks in the Internet of Things
    Dunn, Corey
    Moustafa, Nour
    Turnbull, Benjamin
    SUSTAINABILITY, 2020, 12 (16)
  • [29] Mixed Strategy Game Model Against Data Poisoning Attacks
    Ou, Yifan
    Samavi, Reza
    2019 49TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOPS (DSN-W), 2019, : 39 - 43
  • [30] Continual Learning in Convolutional Neural Networks with Tensor Rank Updates
    Krol, Matt
    Hyder, Rakib
    Peechatt, Michael
    Prater-Bennette, Ashley
    Asif, M. Salman
    Markopoulos, Panos P.
    2024 IEEE 13TH SENSOR ARRAY AND MULTICHANNEL SIGNAL PROCESSING WORKSHOP, SAM 2024, 2024