VeriFi: Towards Verifiable Federated Unlearning

Times cited: 7
Authors
Gao, Xiangshan [1]
Ma, Xingjun [2]
Wang, Jingyi [1]
Sun, Youcheng [3]
Li, Bo [4]
Ji, Shouling [1]
Cheng, Peng [1]
Chen, Jiming [1,5]
Affiliations
[1] Zhejiang Univ, Hangzhou 310027, Peoples R China
[2] Fudan Univ, Shanghai 200437, Peoples R China
[3] Univ Manchester, Manchester M13 9PL, England
[4] Univ Illinois Urbana-Champaign (UIUC), Champaign, IL 61820 USA
[5] Hangzhou Dianzi Univ, Hangzhou 310005, Peoples R China
Funding
US National Science Foundation;
Keywords
Data models; Servers; Training; Systematics; Recurrent neural networks; Task analysis; Federated learning (FL); unlearning; verification; right to be forgotten
DOI
10.1109/TDSC.2024.3382321
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Federated learning (FL) has emerged as a privacy-aware collaborative learning paradigm in which participants jointly train a powerful model without sharing their private data. One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request the deletion of its private data from the global model. However, unlearning itself may not be enough to implement RTBF unless the unlearning effect can be independently verified, an important aspect that has been overlooked in the current literature. Unlearning verification is particularly challenging in FL because the unlearning effect on one participant's data can be canceled out by the contributions of other participants. In this work, we put forward the concept of verifiable federated unlearning and propose VeriFi, a unified framework that allows systematic analysis of federated unlearning and quantification of its effect under different combinations of unlearning and verification methods. In VeriFi, the leaving participant is granted the right to verify (RTV): it can actively verify the unlearning effect in the few rounds immediately after notifying the server of its intention to leave, with local verification performed in two steps: 1) marking, which fingerprints the leaving participant with specially designed markers, and 2) checking, which examines the global model's performance change on those markers. Based on VeriFi, we conduct the most systematic study to date on verifiable federated unlearning, covering six unlearning methods and five verification methods. Our study sheds light on the drawbacks of existing unlearning and verification methods and on potential alternatives. During the study, we also propose a more efficient and FL-friendly unlearning method, uS2U, and two more effective and robust non-invasive verification methods, vFM and vEM, which require no training controllability, external data, or white-box model access and introduce no new security risks. While the proposed methods are not a panacea for all challenges, they address several key drawbacks of existing methods and represent a promising step toward effective, efficient, robust, and, more importantly, non-invasive federated unlearning and verification. We extensively evaluate VeriFi on seven datasets, covering natural, facial, and medical images as well as audio, and on four types of deep learning models, including both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We hope that this extensive and holistic experimental evaluation, although admittedly complex and challenging, helps establish important empirical understandings, evidence, and insights for trustworthy federated unlearning.
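The two-step marking-and-checking protocol described in the abstract can be made concrete with a short sketch. The Python code below is a minimal illustration under stated assumptions, not the authors' implementation: the marker construction in `make_markers`, the classifier interface `predict`, and the `min_drop` accuracy threshold are all hypothetical names and choices introduced here for exposition.

```python
# Minimal sketch of VeriFi-style two-step local verification (marking and
# checking). NOT the paper's reference implementation: `make_markers`, the
# `predict` interface, and `min_drop` are illustrative assumptions.
import numpy as np

def make_markers(samples: np.ndarray, rng: np.random.Generator,
                 noise_scale: float = 0.1) -> np.ndarray:
    """Step 1 (marking): fingerprint the leaving participant by adding a
    fixed perturbation pattern to its own samples; the global model is
    expected to memorize this pattern while the participant is present."""
    pattern = rng.normal(scale=noise_scale, size=samples.shape[1:])
    return samples + pattern  # the same pattern on every marker sample

def marker_accuracy(predict, markers: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of markers the model classifies with their original labels."""
    return float(np.mean(predict(markers) == labels))

def check_unlearning(predict_before, predict_after, markers: np.ndarray,
                     labels: np.ndarray, min_drop: float = 0.2) -> bool:
    """Step 2 (checking): compare the global model's marker performance
    before vs. after unlearning; a clear accuracy drop supports the claim
    that the participant's contribution was removed."""
    drop = (marker_accuracy(predict_before, markers, labels)
            - marker_accuracy(predict_after, markers, labels))
    return drop >= min_drop
```

In this reading, the leaving participant builds markers from its own data before announcing departure, records the pre-unlearning global model's accuracy on them, and re-checks the post-unlearning model during the rounds covered by its right to verify.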
Pages: 5720-5736
Page count: 17