Machine Unlearning via Representation Forgetting With Parameter Self-Sharing

Cited by: 3
Authors
Wang, Weiqi [1 ]
Zhang, Chenhan [1 ]
Tian, Zhiyi [1 ]
Yu, Shui [1 ]
Affiliations
[1] Univ Technol Sydney, Sch Comp Sci, Ultimo, NSW 2007, Australia
Keywords
Data models; Training; Degradation; Optimization; Computational modeling; Mutual information; Task analysis; Machine unlearning; representation forgetting; multi-objective optimization; machine learning
DOI
10.1109/TIFS.2023.3331239
CLC Classification Number
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
Machine unlearning enables data owners to remove the contribution of their specified samples from trained models. However, existing methods fail to strike an optimal balance between erasure effectiveness and preservation of model utility. Previous studies focused on removing the impact of user-specified data from the model as thoroughly as possible, which usually results in significant utility degradation, commonly called catastrophic unlearning. To address this issue, we systematically consider machine unlearning and formulate it as a two-objective optimization problem that involves forgetting the erased data while retaining the previously learned knowledge, thereby emphasizing accuracy preservation during the unlearning process. We propose an unlearning method called representation-forgetting unlearning with parameter self-sharing (RFU-SS) to achieve this two-objective goal. First, we design a representation-forgetting unlearning (RFU) method that removes the contribution of specified samples from a trained representation by minimizing the mutual information between the representation and the erased data. The representation is learned with the information bottleneck (IB) method, and RFU is tailored to models with an IB structure for ease of exposition. Second, we customize a parameter self-sharing structural optimization method for RFU (i.e., RFU-SS) that optimizes the forgetting and retention objectives simultaneously to find the optimal balance between them. Extensive experimental results demonstrate that RFU-SS significantly outperforms state-of-the-art methods: it almost eliminates catastrophic unlearning, reducing the accuracy degradation on MNIST from over 6% to less than 0.2% while achieving an even better removal effect. The source code is available at https://github.com/wwq5-code/RFU-SS.git.
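The abstract describes the two unlearning objectives only in words; the following is a minimal mathematical sketch of one plausible formalization, with all notation (encoder parameters \theta, stochastic representation Z, retained data (X_r, Y_r), erased data X_e, IB trade-off coefficient \beta) assumed here rather than taken from the paper:

\begin{aligned}
\mathcal{L}_{\mathrm{retain}}(\theta) &= I(Z; X_r) - \beta\, I(Z; Y_r) && \text{(information bottleneck objective on the retained data)} \\
\mathcal{L}_{\mathrm{forget}}(\theta) &= I(Z; X_e) && \text{(mutual information between the representation and the erased data)} \\
\min_{\theta}\ &\bigl(\mathcal{L}_{\mathrm{retain}}(\theta),\ \mathcal{L}_{\mathrm{forget}}(\theta)\bigr) && \text{(both objectives over the shared parameters } \theta\text{)}
\end{aligned}

Under this reading, RFU corresponds to minimizing \mathcal{L}_{\mathrm{forget}} for an IB-trained representation, while RFU-SS treats the pair as a multi-objective problem over the self-shared parameters \theta (e.g., via scalarization or a common descent direction); the paper's exact construction may differ.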
Pages: 1099-1111
Number of pages: 13