Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models

Cited by: 3
Authors:
Umer, Muhammad [1]
Polikar, Robi [1]
Affiliation:
[1] Rowan Univ, Dept Elect & Comp Engn, Glassboro, NJ 08028 USA
Source:
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
DOI:
10.1109/IJCNN52387.2021.9533400
CLC Classification:
TP18 [Artificial Intelligence Theory]
Discipline Codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
Continual (or "incremental") learning approaches are employed when additional knowledge or tasks need to be learned from subsequent batches or from streaming data. However, these approaches are typically adversary agnostic, i.e., they do not consider the possibility of a malicious attack. In our prior work, we explored the vulnerabilities of Elastic Weight Consolidation (EWC) to perceptible misinformation. We now explore the vulnerabilities of other regularization-based as well as generative replay-based continual learning algorithms, and also extend the attack to imperceptible misinformation. We show that an intelligent adversary can take advantage of a continual learning algorithm's ability to retain existing knowledge over time, and force it to learn and retain deliberately introduced misinformation. To demonstrate this vulnerability, we inject backdoor attack samples into the training data. These attack samples constitute the misinformation, allowing the attacker to seize control of the model at test time. We evaluate the extent of this vulnerability on both rotated and split benchmark variants of the MNIST dataset under two important domain and class incremental learning scenarios. We show that the adversary can create a "false memory" about any task by inserting carefully designed backdoor samples into the test instances of that task, thereby controlling the amount of forgetting of any task of its choosing. Perhaps most importantly, we show this vulnerability to be very acute and damaging: the model's memory can be easily compromised by adding backdoor samples to as little as 1% of the training data, even when the misinformation is imperceptible to the human eye.
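The abstract's core attack, poisoning as little as 1% of the training data with a backdoor trigger and the attacker's chosen label, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function name `poison_dataset`, the bottom-right 3x3 patch trigger, and the random data stand-in for MNIST are all assumptions for demonstration.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.01,
                   trigger_value=1.0, seed=0):
    """Stamp a backdoor trigger (a small bright corner patch) into a
    fraction of the training images and relabel them to the attacker's
    target class. Returns poisoned copies and the poisoned indices."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Hypothetical trigger: a 3x3 patch in the bottom-right corner.
    images[idx, -3:, -3:] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

# Example on MNIST-shaped random data (28x28 grayscale, 10 classes).
X = np.random.default_rng(1).random((1000, 28, 28)).astype(np.float32)
y = np.random.default_rng(2).integers(0, 10, size=1000)
Xp, yp, idx = poison_dataset(X, y, target_label=7, poison_frac=0.01)
```

At test time, the same trigger patch stamped onto a clean input would activate the backdoor and steer the compromised model toward the target label; a near-imperceptible variant would simply use a much smaller `trigger_value` perturbation added to the pixels instead of overwriting them.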
Pages: 8