Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models

Cited by: 1
Authors
Umer, Muhammad [1 ]
Polikar, Robi [1 ]
Affiliation
[1] Rowan Univ, Dept Elect & Comp Engn, Glassboro, NJ 08028 USA
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
DOI
10.1109/IJCNN52387.2021.9533400
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Continual (or "incremental") learning approaches are employed when additional knowledge or tasks need to be learned from subsequent batches or from streaming data. However, these approaches are typically adversary-agnostic, i.e., they do not consider the possibility of a malicious attack. In our prior work, we explored the vulnerabilities of Elastic Weight Consolidation (EWC) to perceptible misinformation. We now explore the vulnerabilities of other regularization-based as well as generative replay-based continual learning algorithms, and also extend the attack to imperceptible misinformation. We show that an intelligent adversary can take advantage of a continual learning algorithm's ability to retain existing knowledge over time, and force it to learn and retain deliberately introduced misinformation. To demonstrate this vulnerability, we inject backdoor attack samples into the training data. These attack samples constitute the misinformation, allowing the attacker to take control of the model at test time. We evaluate the extent of this vulnerability on both rotated and split benchmark variants of the MNIST dataset under two important scenarios: domain incremental and class incremental learning. We show that the adversary can create a "false memory" about any task by inserting carefully designed backdoor samples into the test instances of that task, thereby controlling the amount of forgetting of any task of its choosing. Perhaps most importantly, we show this vulnerability to be very acute and damaging: the model's memory can be easily compromised by adding backdoor samples to as little as 1% of the training data, even when the misinformation is imperceptible to the human eye.
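To make the attack mechanics concrete, below is a minimal illustrative sketch (not the authors' released code) of the backdoor-injection step the abstract describes, assuming MNIST-style images held as NumPy arrays in [0, 1]; the function name inject_backdoor, the checkerboard trigger pattern, and the amplitude value are hypothetical choices.

import numpy as np

def inject_backdoor(images, labels, target_label, poison_frac=0.01,
                    amplitude=4.0 / 255.0, seed=0):
    # Hypothetical sketch: stamp a low-amplitude trigger onto a random
    # poison_frac subset of a task's training images and relabel those
    # samples to an attacker-chosen class.
    # images: float array in [0, 1], shape (N, 28, 28); labels: int array (N,)
    rng = np.random.default_rng(seed)
    x, y = images.copy(), labels.copy()
    n_poison = max(1, int(poison_frac * len(x)))   # ~1% of the task's data
    idx = rng.choice(len(x), size=n_poison, replace=False)

    # Fixed checkerboard trigger in the bottom-right corner; the small
    # amplitude keeps the pattern imperceptible to the human eye.
    trigger = np.zeros((28, 28), dtype=x.dtype)
    trigger[-4:, -4:] = amplitude * (np.indices((4, 4)).sum(axis=0) % 2)

    x[idx] = np.clip(x[idx] + trigger, 0.0, 1.0)   # stamp the trigger
    y[idx] = target_label                          # relabel to attacker's class
    return x, y, idx

At test time, stamping the same trigger onto any instance of the poisoned task would steer the model's prediction toward target_label, producing the "false memory" effect described above.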
Pages: 8
Related Papers
50 records in total
  • [1] Preempting Catastrophic Forgetting in Continual Learning Models by Anticipatory Regularization
    El Khatib, Alaa
    Karray, Fakhri
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [2] Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
    Heng, Alvin
    Soh, Harold
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] Continual learning with invertible generative models
    Pomponi, Jary
    Scardapane, Simone
    Uncini, Aurelio
    NEURAL NETWORKS, 2023, 164 : 606 - 616
  • [4] Generative Models from the perspective of Continual Learning
    Lesort, Timothee
    Caselles-Dupre, Hugo
    Garcia-Ortiz, Michael
    Stoian, Andrei
    Filliat, David
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [5] Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
    Umer, Muhammad
    Dawson, Glenn
    Polikar, Robi
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [6] GopGAN: Gradients Orthogonal Projection Generative Adversarial Network With Continual Learning
    Li, Xiaobin
    Wang, Weiqiang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (01) : 215 - 227
  • [7] Learning Universal Adversarial Perturbations with Generative Models
    Hayes, Jamie
    Danezis, George
    2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 43 - 49
  • [8] Evaluating and Explaining Generative Adversarial Networks for Continual Learning under Concept Drift
    Guzy, Filip
    Wozniak, Michal
    Krawczyk, Bartosz
    21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS ICDMW 2021, 2021, : 295 - 303
  • [9] Continual Activity Recognition with Generative Adversarial Networks
    Ye, Juan
    Nakwijit, Pakawat
    Schiemer, Martin
    Jha, Saurav
    Zambonelli, Franco
    ACM TRANSACTIONS ON INTERNET OF THINGS, 2021, 2 (02)
  • [10] Recall-Oriented Continual Learning with Generative Adversarial Meta-Model
    Kang, Haneol
    Choi, Dong-Wan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, : 13040 - 13048