EPFFL: Enhancing Privacy and Fairness in Federated Learning for Distributed E-Healthcare Data Sharing Services

Times Cited: 0
Authors
Liu, Jingwei [1 ,2 ]
Li, Yating [1 ,2 ]
Zhao, Mengjiao [1 ,2 ]
Liu, Lei [3 ]
Kumar, Neeraj [4 ]
Affiliations
[1] State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Xidian Univ, Shaanxi Key Lab Blockchain & Secure Comp, Xian 710071, Peoples R China
[3] Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
[4] Deemed Univ, Thapar Inst Engn & Technol, Dept Comp Sci Engn, Patiala 147004, India
Funding
National Natural Science Foundation of China;
关键词
Medical services; Privacy; Training; Solid modeling; Computational modeling; Data models; Blockchains; Blockchain; e-healthcare services; federated learning; homomorphic encryption; medical data collaborative training;
DOI
10.1109/TDSC.2024.3431542
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated Learning (FL) has made remarkable achievements in medical and e-healthcare services: different healthcare institutions can jointly train models to facilitate intelligent diagnosis. However, the model gradients transmitted among these institutions may still leak private information about the local models and training datasets. Additionally, in current FL schemes, institutions with different quantities or qualities of medical data usually receive the same trained model, which may significantly dampen their motivation. Ensuring both privacy and fairness in collaborative training therefore remains a challenge. To address these issues, we propose a privacy-enhanced and fair FL scheme (EPFFL) to support distributed large-scale data sharing for e-healthcare services. During training, participants upload model gradients, encrypted according to their sharing wishes, to the blockchain while storing their training data locally. Hence, the FL initiator can obtain only the aggregated gradients from the blockchain rather than the local data of other participants. Moreover, EPFFL ensures fairness by evaluating the participants' contributions: participants with different data qualities and sharing levels obtain final models with different accuracies at the end of training. Theoretical and simulation analyses show that the scheme offers superior privacy preservation and fairness while maintaining ideal model accuracy.
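The privacy-preserving aggregation described in the abstract rests on homomorphic encryption: an aggregator can combine encrypted gradients without ever seeing the plaintexts. The sketch below illustrates that principle with a toy Paillier cryptosystem (additively homomorphic). All names are our own and the parameters are insecurely small for readability; this is an illustration of the general technique, not the paper's actual construction.

```python
import random
from math import gcd

# Toy Paillier keypair. The primes are far too small for real use;
# a deployment would use primes of 1024+ bits each.
def keygen():
    p, q = 1009, 1013
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                                      # standard simplified generator
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)                  # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    while True:
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pk, sk = keygen()

# Each participant encrypts a (quantized) local gradient value.
# Multiplying ciphertexts adds the underlying plaintexts, so the
# aggregator learns only the encrypted sum, never any single gradient.
grads = [3, 5, 7]
agg = 1
for gi in grads:
    agg = (agg * encrypt(pk, gi)) % (pk[0] ** 2)

print(decrypt(pk, sk, agg))  # 15
```

In an FL setting, the initiator (or a blockchain smart contract) would perform only the ciphertext multiplication step, and decryption of the aggregate would be restricted to an authorized key holder, matching the scheme's goal that the initiator sees aggregated gradients only.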
Pages: 1239-1252
Number of Pages: 14