Learn to Forget: Machine Unlearning via Neuron Masking

Cited by: 21
Authors
Ma, Zhuo [1]
Liu, Yang [1]
Liu, Ximeng [2,3]
Liu, Jian [4]
Ma, Jianfeng [1]
Ren, Kui [4]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, Xian 710126, Peoples R China
[2] Fuzhou Univ, Coll Math & Comp Sci, Fuzhou 350025, Peoples R China
[3] Peng Cheng Lab, Cyberspace Secur Res Ctr, Shenzhen 518066, Peoples R China
[4] Zhejiang Univ, Inst Cyberspace Res, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Machine unlearning; neuron masking; neural network
DOI
10.1109/TDSC.2022.3194884
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Nowadays, machine learning models, especially neural networks, have become prevalent in many real-world applications. These models are trained on user data in a one-way trip: once users contribute their data, there is no way to withdraw it. For this reason, machine unlearning has become a popular research topic; it allows the model trainer to unlearn unwanted data from a trained machine learning model. In this article, we propose the first uniform metric, called the forgetting rate, to measure the effectiveness of a machine unlearning method. It is based on the concept of membership inference and describes the rate at which the eliminated data are transformed from "memorized" to "unknown" after unlearning. We also propose a novel unlearning method called Forsaken, which is superior to previous work in either utility or efficiency (when achieving the same forgetting rate). We benchmark Forsaken on eight standard datasets. The experimental results show that it achieves a forgetting rate of more than 90% on average while causing less than 5% accuracy loss.
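The abstract defines the forgetting rate only informally. The sketch below shows one plausible formalization, assuming it is the fraction of erased samples that a membership inference attack flips from "member" to "non-member" after unlearning; this reading is inferred from the abstract's wording, not taken from the paper, and the membership oracles `mi_attack_before` and `mi_attack_after` are hypothetical placeholders.

```python
from typing import Any, Callable, Sequence


def forgetting_rate(
    erased_samples: Sequence[Any],
    mi_attack_before: Callable[[Any], bool],  # membership oracle on the original model
    mi_attack_after: Callable[[Any], bool],   # membership oracle on the unlearned model
) -> float:
    """Fraction of erased samples that flip from "memorized" (member)
    to "unknown" (non-member) after unlearning.

    Assumption: this formalization is reconstructed from the abstract's
    description and may differ from the paper's exact definition.
    """
    # Erased samples the attack still recognizes as training-set members
    # before unlearning, i.e., the "memorized" subset.
    memorized = [x for x in erased_samples if mi_attack_before(x)]
    if not memorized:
        return 0.0  # nothing was memorized, so nothing can be forgotten

    # Of those, count the ones the attack no longer recognizes afterwards.
    forgotten = sum(1 for x in memorized if not mi_attack_after(x))
    return forgotten / len(memorized)
```

Under this reading, the reported 90% average forgetting rate would mean that roughly nine out of ten previously memorized erased samples become indistinguishable from non-members after unlearning.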
Pages: 3194-3207
Page count: 14