Secure and Accurate Personalized Federated Learning With Similarity-Based Model Aggregation

Cited: 0
Authors
Tan, Zhouyong [1 ,2 ]
Le, Junqing [1 ,2 ]
Yang, Fan [1 ,2 ]
Huang, Min [1 ,2 ,3 ]
Xiang, Tao [1 ,2 ]
Liao, Xiaofeng [1 ,2 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Chongqing Univ, Key Lab Dependable Serv Comp Cyber Phys Soc, Minist Educ, Chongqing 400044, Peoples R China
[3] Chongqing Educ Evaluat Inst, Chongqing 400020, Peoples R China
Source
IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING | 2025, Vol. 10, Issue 1
Funding
National Key R&D Program of China; National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Computational modeling; Data models; Predictive models; Servers; Privacy; Adaptation models; Federated learning; Personalized federated learning; privacy protection; secure aggregation; similarity metric;
DOI
10.1109/TSUSC.2024.3403427
CLC classification
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
Personalized federated learning (PFL) combines client needs and data characteristics to train personalized models for local clients. However, most previous PFL schemes encounter challenges such as low model prediction accuracy and privacy leakage when applied to practical datasets. Moreover, existing privacy protection methods fail to achieve satisfactory model prediction accuracy and security simultaneously. In this paper, we propose Privacy-preserving Personalized Federated Learning under Secure Multi-party Computation (SMC-PPFL), which preserves privacy while obtaining a local personalized model with high prediction accuracy. In SMC-PPFL, noise perturbation is utilized to protect the similarity computation, and secure multi-party computation is employed for model sub-aggregations. This combination ensures that clients' privacy is preserved and that the computed values remain unbiased without compromising security. We then propose a weighted sub-aggregation strategy based on client similarity and introduce a regularization term into local training to improve prediction accuracy. Finally, we evaluate the performance of SMC-PPFL on three common datasets. The experimental results show that SMC-PPFL achieves 2% to 15% higher prediction accuracy than previous PFL schemes. The security analysis also verifies that SMC-PPFL can resist model inversion attacks and membership inference attacks.
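The abstract's core mechanism can be illustrated with a minimal sketch: perturb model updates with noise before computing pairwise similarity, then aggregate clients' models with weights derived from their similarity to the target client. This is an assumption-laden toy in NumPy, not the paper's actual protocol; in particular, the function names (`noisy_cosine_similarity`, `personalized_aggregate`), the choice of cosine similarity, the Gaussian noise model, and the softmax weighting are all illustrative stand-ins, and the secure multi-party computation layer is omitted entirely.

```python
import numpy as np

def noisy_cosine_similarity(u, v, noise_scale=0.01, rng=None):
    """Cosine similarity computed on noise-perturbed copies of u and v,
    so raw model updates are never compared directly (toy stand-in for
    the paper's protected similarity computation)."""
    if rng is None:
        rng = np.random.default_rng(0)
    u_p = u + rng.normal(0.0, noise_scale, u.shape)
    v_p = v + rng.normal(0.0, noise_scale, v.shape)
    return float(u_p @ v_p / (np.linalg.norm(u_p) * np.linalg.norm(v_p)))

def personalized_aggregate(models, target_idx, noise_scale=0.01):
    """Form a personalized model for client `target_idx` as a weighted
    average of all clients' models, weighting each by its perturbed
    similarity to the target (softmax-normalized)."""
    rng = np.random.default_rng(42)
    target = models[target_idx]
    sims = np.array([
        noisy_cosine_similarity(target, m, noise_scale, rng) for m in models
    ])
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax over similarities
    return weights @ np.stack(models)            # personalized aggregate
```

Clients whose updates point in a similar direction to the target's receive larger weights, which captures the intuition behind similarity-based sub-aggregation; in the real scheme these per-client contributions would be combined inside a secure multi-party computation so no single party sees another's plaintext model.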
Pages: 132-145
Page count: 14