Privacy-Preserving Federated Unlearning With Certified Client Removal

Times Cited: 0
Authors
Liu, Ziyao [1]
Ye, Huanyi [2]
Jiang, Yu [2]
Shen, Jiyuan [2]
Guo, Jiale [1]
Tjuawinata, Ivan [3]
Lam, Kwok-Yan [4,5]
Affiliations
[1] Nanyang Technological University, Digital Trust Centre, Singapore, Singapore
[2] Nanyang Technological University, College of Computing and Data Science, Singapore, Singapore
[3] Nanyang Technological University, Strategic Centre for Research in Privacy-Preserving Technologies and Systems, Singapore, Singapore
[4] Nanyang Technological University, College of Computing and Data Science, Singapore, Singapore
[5] Nanyang Technological University, Digital Trust Centre, Singapore, Singapore
Funding
National Research Foundation, Singapore
Keywords
Servers; Cryptography; Training; Privacy; Data models; Data privacy; Protocols; Federated learning; Threat modeling; Systems architecture; Federated unlearning; secure multi-party computation; certified removal
DOI
10.1109/TIFS.2025.3555868
CLC Number
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
In recent years, Federated Unlearning (FU) has gained attention for removing a client's influence from the global model in Federated Learning (FL) systems, thereby upholding the "right to be forgotten" (RTBF). State-of-the-art unlearning methods rely on historical data from FL clients, such as gradients or locally trained models. However, studies have revealed significant information leakage in this setting: a user's local data can be reconstructed from their uploaded information. To address this, we propose Starfish, a privacy-preserving federated unlearning scheme that applies Two-Party Computation (2PC) techniques to historical client data shared between two non-colluding servers. Starfish builds upon existing FU methods to ensure privacy throughout the unlearning process. To make privacy-preserving FU evaluation more efficient, we propose 2PC-friendly alternatives for certain operations in the FU algorithm. We also introduce strategies to reduce the cost of 2PC operations and to limit cumulative approximation error. Moreover, we establish a theoretical bound on the difference between the global model unlearned via Starfish and a global model retrained from scratch, yielding certified client removal. Our theoretical and experimental analyses demonstrate that Starfish achieves effective unlearning with reasonable efficiency while maintaining privacy and security in FL systems.
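The abstract's core privacy mechanism rests on a standard 2PC building block: additively secret-sharing each client's contribution between two non-colluding servers, so that linear steps such as aggregation, or subtracting the unlearned client's contribution, can be performed locally on shares. The sketch below is a minimal illustration of that generic primitive, not Starfish's actual protocol; the toy integer-encoded updates and all names are illustrative assumptions.

```python
import secrets

MOD = 2 ** 32  # work over the ring Z_{2^32}

def share(x):
    """Split x into two additive shares that sum to x mod MOD."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(a, b):
    return (a + b) % MOD

def share_vec(v):
    pairs = [share(x) for x in v]
    return [p[0] for p in pairs], [p[1] for p in pairs]

def sub_vec(u, v):
    return [(a - b) % MOD for a, b in zip(u, v)]

# Toy integer-encoded model updates from three clients.
updates = {"c1": [5, 7], "c2": [1, 2], "c3": [10, 0]}

# Each client's update is secret-shared between server 0 and server 1.
s0, s1 = {}, {}
for cid, v in updates.items():
    s0[cid], s1[cid] = share_vec(v)

# Each server sums its own shares locally -> shares of the aggregate.
agg0 = [sum(col) % MOD for col in zip(*s0.values())]
agg1 = [sum(col) % MOD for col in zip(*s1.values())]

# "Unlearn" client c2: each server subtracts its share of c2's update,
# again without learning c2's plaintext data.
agg0 = sub_vec(agg0, s0["c2"])
agg1 = sub_vec(agg1, s1["c2"])

result = [reconstruct(a, b) for a, b in zip(agg0, agg1)]
print(result)  # [15, 7] = c1 + c3
```

The subtraction works only because additive shares are linear; nonlinear steps of an FU algorithm are precisely what require the 2PC-friendly approximations the abstract mentions, which in turn introduce the cumulative approximation error the paper's bound accounts for.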
Pages: 3966-3978
Page count: 13