Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited by: 2
Authors
Li, Huayu [1 ]
Ditzler, Gregory [1 ]
Affiliations
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Funding
U.S. National Science Foundation;
Keywords
continual learning; adversarial machine learning; data poisoning attack;
DOI
10.1109/IJCNN55064.2022.9892774
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks by training on batches or streaming data over time. In many real-world scenarios, the environments that generate streaming data draw on untrusted sources, which an adversary can poison by manipulating and injecting malicious samples into the training data. Such untrusted data sources and malicious samples expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused only on adversary-agnostic scenarios, without considering the possibility of data poisoning attacks. Further, recent work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under a relaxed constraint on data manipulation. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
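The core idea in the abstract, clean-label poisons that leave their own task's labels correct but are crafted to erase a previously learned task, can be illustrated on a toy domain-incremental problem. The sketch below is not the paper's attack; it is a minimal, hypothetical heuristic under simplifying assumptions: a shared linear classifier is fine-tuned sequentially on two two-feature tasks, and the poisoner nudges the task-1 feature of task-2 samples (bounded by `eps`) to anti-correlate with the task-2 label, dragging the weight that task 1 depends on in the wrong direction while all task-2 labels stay clean.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, axis):
    """Toy binary task: the label is the sign of one feature; the
    other feature is near-zero noise."""
    X = rng.normal(0.0, 1.0, (n, 2))
    y = (X[:, axis] > 0).astype(float)
    X[:, 1 - axis] = rng.normal(0.0, 0.1, n)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.5, epochs=2000):
    """Full-batch gradient descent on logistic loss: naive sequential
    fine-tuning, i.e., continual learning with no forgetting defense."""
    w = w.copy()
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

# Task 1 depends on feature 0; task 2 depends on feature 1
# (a domain-incremental setting with one shared linear model).
X1, y1 = make_task(500, axis=0)
X2, y2 = make_task(500, axis=1)

# Clean-label poisoning of the task-2 batch: every label in y2 stays
# correct, but the task-1 feature of each sample is set (bounded by
# eps, so the poisons remain small) to anti-correlate with the label.
eps = 0.5
X2_poisoned = X2.copy()
X2_poisoned[:, 0] = -eps * (2.0 * y2 - 1.0)

w_task1 = train(np.zeros(2), X1, y1)
w_clean = train(w_task1, X2, y2)              # benign task-2 update
w_poisoned = train(w_task1, X2_poisoned, y2)  # poisoned task-2 update

print("task-1 accuracy after clean task 2:   ", accuracy(w_clean, X1, y1))
print("task-1 accuracy after poisoned task 2:", accuracy(w_poisoned, X1, y1))
print("task-2 accuracy of the poisoned model:", accuracy(w_poisoned, X2, y2))
```

In this toy run the poisoned model keeps high accuracy on task 2, which is what makes the attack stealthy, while accuracy on the targeted earlier task falls to chance level or below. The paper's setting replaces this linear model with neural networks and state-of-the-art continual learners, where the poisons must be crafted rather than hand-designed.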
Pages: 8
Related Papers
50 records in total
  • [31] CONTINUAL LEARNING ON FACIAL RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS
    Feng, Jingjing
    Gomez, Valentina
    UNIVERSITY POLITEHNICA OF BUCHAREST SCIENTIFIC BULLETIN SERIES C-ELECTRICAL ENGINEERING AND COMPUTER SCIENCE, 2023, 85 (03): : 239 - 248
  • [32] OvA-INN: Continual Learning with Invertible Neural Networks
    Hocquet, Guillaume
    Bichler, Olivier
    Querlioz, Damien
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [33] Decentralized Optimization Resilient Against Local Data Poisoning Attacks
    Mao, Yanwen
    Data, Deepesh
    Diggavi, Suhas
    Tabuada, Paulo
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2025, 70 (01) : 81 - 96
  • [34] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2847 - 2856
  • [35] Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations
    Ororbia, Alexander
    Mali, Ankur
    Giles, C. Lee
    Kifer, Daniel
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (10) : 4267 - 4278
  • [36] Data Poisoning Attacks and Defenses in Dynamic Crowdsourcing With Online Data Quality Learning
    Zhao, Yuxi
    Gong, Xiaowen
    Lin, Fuhong
    Chen, Xu
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (05) : 2569 - 2581
  • [37] Subpopulation Data Poisoning Attacks
    Jagielski, Matthew
    Severi, Giorgio
    Harger, Niklas Pousette
    Oprea, Alina
    CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 3104 - 3122
  • [38] Gating Mechanism in Deep Neural Networks for Resource-Efficient Continual Learning
    Jin, Hyundong
    Yun, Kimin
    Kim, Eunwoo
    IEEE ACCESS, 2022, 10 : 18776 - 18786
  • [39] Assessing Wearable Human Activity Recognition Systems Against Data Poisoning Attacks in Differentially-Private Federated Learning
    Shahid, Abdur R.
    Imteaj, Ahmed
    Badsha, Shahriar
    Hossain, Md Zarif
    2023 IEEE INTERNATIONAL CONFERENCE ON SMART COMPUTING, SMARTCOMP, 2023, : 355 - 360
  • [40] Neural Agents with Continual Learning Capacities
    Zhinin-Vera, Luis
    Pretel, Elena
    Moya, Alejandro
    Jimenez-Ruescas, Javier
    Astudillo, Jaime
    INFORMATION AND COMMUNICATION TECHNOLOGIES, TICEC 2024, 2025, 2273 : 145 - 159