DATA POISONING ATTACK AIMING THE VULNERABILITY OF CONTINUAL LEARNING

Cited by: 2
Authors
Han, Gyojin [1 ]
Choi, Jaehyun [1 ]
Hong, Hyeong Gwon [2 ]
Kim, Junmo [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon, South Korea
[2] Korea Adv Inst Sci & Technol, Kim Jaechul Grad Sch, Daejeon, South Korea
Source
2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP | 2023
Keywords
Data poisoning; continual learning; catastrophic forgetting;
DOI
10.1109/ICIP49359.2023.10222168
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Regularization-based continual learning models generally restrict access to previous task data in order to mimic real-world memory and privacy constraints. However, this restriction prevents these models from tracking their performance on each task, which leaves current continual learning methods susceptible to attacks on previous tasks. We demonstrate the vulnerability of regularization-based continual learning methods by presenting a simple task-specific data poisoning attack that can be applied during the learning of a new task. Training data generated by the proposed attack degrades performance on the specific task targeted by the attacker. We evaluate the attack on two representative regularization-based continual learning methods, Elastic Weight Consolidation (EWC) and Synaptic Intelligence (SI), trained on variants of the MNIST dataset. The experimental results confirm the vulnerability identified in this paper and demonstrate the importance of developing continual learning models that are robust to adversarial attacks.
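The regularization that the attacked methods rely on can be sketched as follows. In EWC, the loss on a new task adds a quadratic penalty anchoring each parameter to its value after the previous task, weighted by an estimate of that parameter's importance (the Fisher information); an attack of the kind described above would craft new-task data whose gradients push the high-importance parameters away from those anchors. A minimal sketch with scalar per-parameter values (function and argument names are illustrative, not taken from the paper):

```python
def ewc_loss(task_loss, params, old_params, fisher, lam=1.0):
    # EWC total loss (minimal sketch; names are illustrative):
    #   L(theta) = L_new(theta) + (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2
    # F_i approximates the Fisher information (importance) of parameter i
    # estimated on the previous task; theta_i* is its value after that task.
    penalty = sum(
        f * (p - p_old) ** 2
        for f, p, p_old in zip(fisher, params, old_params)
    )
    return task_loss + 0.5 * lam * penalty
```

SI uses the same quadratic penalty form but computes the importance weights online from the training trajectory instead of from the Fisher information.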
Pages: 1905 - 1909
Page count: 5