Machine Unlearning by Reversing the Continual Learning

Cited by: 2
Authors
Zhang, Yongjing [1 ]
Lu, Zhaobo [2 ]
Zhang, Feng [1 ]
Wang, Hao [1 ]
Li, Shaojing [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Qufu Normal Univ, Sch Comp Sci, Rizhao 276826, Peoples R China
[3] Qingdao Agr Univ, Coll Sci & Informat, Qingdao 266109, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 16
Keywords
machine unlearning; continual learning; elastic weight consolidation; decreasing moment matching;
DOI
10.3390/app13169341
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Recent legislation, such as the European General Data Protection Regulation (GDPR), requires user data holders to guarantee the individual's right to be forgotten: user data must be completely deleted upon request. In machine learning, however, it is not enough to simply remove these data from the back-end database that stores the training set, because the trained model still retains information about them. Retraining the model on a dataset with the data removed overcomes this problem, but incurs expensive computational overhead. To remedy this shortcoming, we propose two effective methods that help model owners or data holders remove private data from a trained model. The first uses an elastic weight consolidation (EWC) constraint term and a modified loss function to neutralize the data to be removed. The second approximates the model's posterior distribution as a Gaussian, and computes the unlearned model by decreasing moment matching (DMM) between the posterior of the neural network trained on all data and that of the network trained on the data to be removed. Finally, we conducted experiments on three standard datasets using backdoor attacks as the evaluation metric. The results show that both methods effectively remove backdoor triggers from deep learning models. Specifically, EWC reduces the backdoor attack success rate to 0, while DMM keeps the model's prediction accuracy above 80% and the attack success rate below 10%.
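The two mechanisms named in the abstract can be sketched in a few lines. The quadratic diagonal-Fisher penalty below is the standard EWC form; the precision-subtraction rule for DMM is an assumption inferred from the abstract's Gaussian-posterior description (merging two Gaussians by subtracting the forget-model's precision-weighted moments), and the names `ewc_penalty` and `dmm_unlearn` are illustrative, not from the paper.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Standard EWC constraint term: penalize drift from anchor weights
    theta_star, weighted per-parameter by the diagonal Fisher information."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def dmm_unlearn(mu_all, prec_all, mu_f, prec_f):
    """Sketch of 'decreasing' moment matching under diagonal Gaussian
    posteriors: remove the forget-model's contribution by subtracting its
    precision and its precision-weighted mean from the full model's."""
    prec_u = prec_all - prec_f                       # remaining precision
    mu_u = (prec_all * mu_all - prec_f * mu_f) / prec_u  # remaining mean
    return mu_u, prec_u

# Toy example: two parameters for EWC, one for DMM.
pen = ewc_penalty(np.array([1.0, 2.0]), np.zeros(2), np.ones(2), lam=2.0)
mu_u, prec_u = dmm_unlearn(np.array([1.5]), np.array([2.0]),
                           np.array([1.0]), np.array([1.0]))
```

In the EWC-based method the paper pairs such a constraint with a modified task loss on the data to be removed; the sketch shows only the constraint term itself.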
Pages: 10