Machine Unlearning: Solutions and Challenges

Cited by: 5
Authors
Xu, Jie [1 ]
Wu, Zihan [2 ]
Wang, Cong [1 ]
Jia, Xiaohua [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[2] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2024, Vol. 8, No. 3
Keywords
Machine unlearning; machine learning security; the right to be forgotten
DOI
10.1109/TETCI.2024.3379240
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation. To address these issues, machine unlearning has emerged as a critical technique for selectively removing the influence of specific training data points from trained models. This paper provides a comprehensive taxonomy and analysis of machine unlearning solutions. We categorize existing solutions into exact unlearning approaches, which remove data influence thoroughly, and approximate unlearning approaches, which minimize data influence efficiently. Through this comprehensive review, we identify and discuss the strengths and limitations of each approach. Furthermore, we propose future directions to advance machine unlearning and establish it as an essential capability for trustworthy and adaptive machine learning models. This paper provides researchers with a roadmap of open problems, encouraging impactful contributions that address real-world needs for selective data removal.
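The defining property of exact unlearning described above can be made concrete with a toy example. The sketch below is illustrative and not from the paper: it uses a hypothetical nearest-centroid classifier whose sufficient statistics (per-class sums and counts) allow a deleted point's contribution to be subtracted exactly, so the post-deletion model is identical to one retrained from scratch on the remaining data.

```python
# Illustrative sketch of exact unlearning (not the paper's method):
# a nearest-centroid classifier stores per-class sums and counts,
# so a training point's influence can be removed exactly.
from collections import defaultdict

class CentroidModel:
    def __init__(self):
        self.sums = defaultdict(lambda: [0.0, 0.0])  # per-class feature sums
        self.counts = defaultdict(int)               # per-class point counts

    def add(self, x, y):
        s = self.sums[y]
        s[0] += x[0]; s[1] += x[1]
        self.counts[y] += 1

    def unlearn(self, x, y):
        # Subtract the point's contribution; the result equals a
        # from-scratch retrain on the remaining data (exact unlearning).
        s = self.sums[y]
        s[0] -= x[0]; s[1] -= x[1]
        self.counts[y] -= 1

    def predict(self, x):
        best, best_d = None, float("inf")
        for y, s in self.sums.items():
            n = self.counts[y]
            if n == 0:
                continue
            cx, cy = s[0] / n, s[1] / n
            d = (x[0] - cx) ** 2 + (x[1] - cy) ** 2
            if d < best_d:
                best, best_d = y, d
        return best

data = [((0.0, 0.0), "a"), ((1.0, 1.0), "a"),
        ((5.0, 5.0), "b"), ((9.0, 9.0), "a")]
model = CentroidModel()
for x, y in data:
    model.add(x, y)

model.unlearn((9.0, 9.0), "a")  # forget the last point

# Check the exactness guarantee against a full retrain.
retrained = CentroidModel()
for x, y in data[:3]:
    retrained.add(x, y)
assert model.sums == retrained.sums and model.counts == retrained.counts
```

Most models (e.g. deep networks) lack such decomposable statistics, which is why exact unlearning generally requires retraining and why the approximate approaches surveyed in the paper trade this guarantee for efficiency.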
Pages: 2150-2168
Page count: 19