VeriFi: Towards Verifiable Federated Unlearning

Cited by: 4
Authors
Gao, Xiangshan [1]
Ma, Xingjun [2]
Wang, Jingyi [1]
Sun, Youcheng [3]
Li, Bo [4]
Ji, Shouling [1]
Cheng, Peng [1]
Chen, Jiming [1,5]
Affiliations
[1] Zhejiang University, Hangzhou 310027, China
[2] Fudan University, Shanghai 200437, China
[3] University of Manchester, Manchester M13 9PL, UK
[4] University of Illinois Urbana-Champaign (UIUC), Champaign, IL 61820, USA
[5] Hangzhou Dianzi University, Hangzhou 310005, China
Funding
U.S. National Science Foundation;
Keywords
Data models; Servers; Training; Federated learning; Systematics; Recurrent neural networks; Task analysis; Federated learning (FL); unlearning; verification; right to be forgotten;
DOI
10.1109/TDSC.2024.3382321
CLC number
TP3 [Computing Technology, Computer Technology];
Discipline code
0812;
Abstract
Federated learning (FL) has emerged as a privacy-aware collaborative learning paradigm in which participants jointly train a powerful model without sharing their private data. One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request the deletion of its private data from the global model. However, unlearning itself may not be enough to implement RTBF unless the unlearning effect can be independently verified, an important aspect that has been overlooked in the current literature. Unlearning verification is particularly challenging in FL because the unlearning effect on one participant's data can be canceled out by the contributions of the other participants. In this work, we propose the concept of verifiable federated unlearning and present VeriFi, a unified framework that enables systematic analysis of federated unlearning and quantification of its effect under different combinations of unlearning and verification methods. In VeriFi, the leaving participant is granted the right to verify (RTV): immediately after notifying the server of its intention to leave, it can actively verify the unlearning effect over the next few rounds, with local verification performed in two steps: 1) marking, which fingerprints the leaving participant with specially designed markers, and 2) checking, which examines the global model's performance change on those markers. Based on VeriFi, we conduct the most systematic study of verifiable federated unlearning to date, covering six unlearning methods and five verification methods. Our study sheds light on the drawbacks of existing unlearning and verification methods and on potential alternatives. We also propose a more efficient and FL-friendly unlearning method, uS2U, and two more effective and robust non-invasive verification methods, vFM and vEM, which require no training controllability, external data, or white-box model access and introduce no new security risks. While the proposed methods are not a panacea for all the challenges, they address several key drawbacks of existing methods and represent a promising step toward effective, efficient, robust, and, most importantly, non-invasive federated unlearning and verification. We extensively evaluate VeriFi on seven datasets, covering natural, facial, and medical images as well as audio, and on four types of deep learning models, including both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We hope that this extensive and holistic experimental evaluation, although admittedly complex and challenging, helps establish important empirical understandings, evidence, and insights for trustworthy federated unlearning.
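The two-step "mark then check" verification described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical stand-in assuming a PyTorch classification setting; the names (mark, marker_accuracy, verify_unlearning, the min_drop threshold) and the marker construction (a fixed set of locally fitted samples) are illustrative assumptions, not the paper's actual vFM/vEM designs, which are considerably more elaborate.

# Hypothetical sketch of VeriFi-style "mark then check" verification.
# The marker set here is a simple stand-in for the paper's vFM/vEM markers.
import torch
import torch.nn as nn


def mark(model: nn.Module, markers: torch.Tensor, labels: torch.Tensor,
         epochs: int = 1, lr: float = 1e-3) -> None:
    """Step 1 ("marking"): before leaving, the participant fingerprints
    itself by locally fitting the model to its markers."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(markers), labels).backward()
        opt.step()


def marker_accuracy(model: nn.Module, markers: torch.Tensor,
                    labels: torch.Tensor) -> float:
    """Step 2 ("checking"): measure the global model's accuracy on the markers."""
    model.eval()
    with torch.no_grad():
        preds = model(markers).argmax(dim=1)
    return (preds == labels).float().mean().item()


def verify_unlearning(model_before: nn.Module, model_after: nn.Module,
                      markers: torch.Tensor, labels: torch.Tensor,
                      min_drop: float = 0.2) -> bool:
    """Compare marker performance before vs. after unlearning; a large
    drop is taken as evidence that the participant's data was forgotten."""
    acc_before = marker_accuracy(model_before, markers, labels)
    acc_after = marker_accuracy(model_after, markers, labels)
    print(f"marker acc before={acc_before:.3f}, after={acc_after:.3f}")
    return acc_before - acc_after >= min_drop


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(10, 5)           # stand-in for the global model
    markers = torch.randn(32, 10)      # hypothetical marker inputs
    labels = torch.randint(0, 5, (32,))
    mark(model, markers, labels, epochs=50)
    forgotten = nn.Linear(10, 5)       # stand-in for the post-unlearning model
    print("unlearned:", verify_unlearning(model, forgotten, markers, labels))

Note that in the actual protocol the leaving participant exercises its right to verify (RTV) by repeating the check over the next few global rounds after its leave request, rather than on a single before/after snapshot as in this sketch.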
Pages: 5720-5736
Page count: 17