EPFFL: Enhancing Privacy and Fairness in Federated Learning for Distributed E-Healthcare Data Sharing Services

Cited: 0
Authors
Liu, Jingwei [1 ,2 ]
Li, Yating [1 ,2 ]
Zhao, Mengjiao [1 ,2 ]
Liu, Lei [3 ]
Kumar, Neeraj [4 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Xidian Univ, Shaanxi Key Lab Blockchain & Secure Comp, Xian 710071, Peoples R China
[3] Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
[4] Thapar Inst Engn & Technol (Deemed Univ), Dept Comp Sci Engn, Patiala 147004, India
Funding
National Natural Science Foundation of China;
Keywords
Medical services; Privacy; Training; Solid modeling; Computational modeling; Data models; Blockchains; Blockchain; e-healthcare services; federated learning; homomorphic encryption; medical data collaborative training;
DOI
10.1109/TDSC.2024.3431542
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated Learning (FL) has made remarkable achievements in medical and e-healthcare services: different healthcare institutions can jointly train models to facilitate intelligent diagnosis. However, the model gradients transmitted among these institutions may still leak private information about the local models and training datasets. Additionally, in current FL schemes, institutions with different quantities or qualities of medical data usually receive the same trained model, which may significantly hamper their motivation to participate. Ensuring both privacy and fairness in collaborative training therefore remains a challenge. To address this issue, we propose a privacy-enhanced and fair FL scheme (EPFFL) to support distributed large-scale data sharing for e-healthcare services. During training, participants upload encrypted model gradients to the blockchain according to their sharing preferences, while keeping their training data local. Hence, the FL initiator can only obtain the aggregated gradients from the blockchain rather than the local data of other participants. Moreover, EPFFL ensures fairness by evaluating the participants' contributions, i.e., participants with different data qualities and sharing levels obtain final models with different accuracies at the end of training. Theoretical and simulation analyses show that the scheme offers superior privacy preservation and fairness while maintaining ideal model accuracy.
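The sketch below is a minimal illustration, not the paper's actual EPFFL construction, of how additively homomorphic encryption lets an FL initiator aggregate gradients without seeing any participant's individual update. It assumes the third-party python-paillier package (phe); a plain Python list stands in for the blockchain ledger, and the gradient values are toy numbers.

# Illustrative sketch only: additively homomorphic gradient aggregation.
# Assumes the python-paillier package is installed (pip install phe).
from functools import reduce
from operator import add

from phe import paillier

# The FL initiator generates a Paillier key pair and publishes the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Toy local gradient vectors held by three participants. In EPFFL-style
# training these would be real model gradients shared according to each
# participant's sharing preference.
local_gradients = [
    [0.12, -0.05, 0.33],  # participant 1
    [0.10, -0.07, 0.31],  # participant 2
    [0.15, -0.02, 0.29],  # participant 3
]

# Each participant encrypts its gradient component-wise before "uploading";
# this list stands in for the blockchain ledger.
ledger = [[public_key.encrypt(g) for g in grads] for grads in local_gradients]

# Ciphertexts are summed without decryption: Paillier is additively
# homomorphic, so the sum of ciphertexts decrypts to the sum of plaintexts.
num_params = len(ledger[0])
encrypted_sum = [
    reduce(add, (update[j] for update in ledger)) for j in range(num_params)
]

# Only the initiator, who holds the private key, recovers the aggregate;
# no individual participant's gradient is ever exposed.
averaged = [private_key.decrypt(c) / len(ledger) for c in encrypted_sum]
print("Averaged gradient:", averaged)

In the paper's scheme, this aggregation takes place on-chain and is combined with a contribution-evaluation step so that participants with different data qualities and sharing levels receive models of different accuracy; the sketch above covers only the confidentiality of the aggregation step.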
Pages: 1239-1252
Number of pages: 14