Contribution Measurement in Privacy-Preserving Federated Learning

Cited by: 0
Authors
Hsu, Ruei-hau [1 ]
Yu, Yi-an [1 ]
Su, Hsuan-cheng [1 ]
Affiliations
[1] Natl Sun Yat Sen Univ, Informat Secur Res Ctr, Dept Comp Sci & Engn, Kaohsiung 804, Taiwan
Keywords
privacy protection; federated learning; contribution measurement; Shapley value; homomorphic encryption; fairness; verifiability
DOI
10.6688/JISE.20241140(6).0002
CLC Classification
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
Federated learning (FL) is a decentralized machine learning framework that differs from traditional centralized machine learning: it enables multiple participants to collaborate on training models without sharing raw data directly. Each participant trains a local model on its own data and uploads only the model parameters. To allocate the benefits of the common global model fairly, a standard is needed to evaluate each model provider's contribution to FL. The Shapley value is a classic concept from cooperative game theory and is often used for data valuation in machine learning. This study introduces the Shapley value into privacy-preserving federated learning (PPFL) to construct a contribution-measurement module that quantifies each model provider's contribution to the learning task, and proposes a verification mechanism for the contribution results. Compared with related works on Shapley-value-based contribution measurement, this work achieves stronger privacy protection: local participants' data sets, local model parameters, and global model parameters are all concealed. In addition, the fairness of the contribution measurement can be verified. Moreover, this work achieves access control over aggregated global models through threshold identity-based encryption, so that model consumers can gain access to a specific aggregated global model only if they are authorized by a sufficient number of model providers.
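The contribution measure at the core of the abstract is the Shapley value: a provider's score is its marginal contribution to model quality, averaged over all orders in which providers could join the coalition. A minimal plaintext sketch of that computation (the table of per-subset accuracies is hypothetical and only illustrative; the paper's actual scheme evaluates this under homomorphic encryption, which is not shown here):

```python
from itertools import permutations

def shapley_values(players, utility):
    """Exact Shapley values: average each player's marginal utility
    over every join order of the full coalition."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p to the coalition built so far.
            values[p] += utility(coalition | {p}) - utility(coalition)
            coalition = coalition | {p}
    return {p: v / len(perms) for p, v in values.items()}

# Hypothetical utility: accuracy of a global model aggregated from each
# subset of providers A, B, C (made-up numbers, not from the paper).
ACC = {
    frozenset(): 0.0,
    frozenset("A"): 0.6, frozenset("B"): 0.5, frozenset("C"): 0.5,
    frozenset("AB"): 0.8, frozenset("AC"): 0.75, frozenset("BC"): 0.7,
    frozenset("ABC"): 0.9,
}

vals = shapley_values(["A", "B", "C"], lambda s: ACC[frozenset(s)])
# A ≈ 0.3583, B ≈ 0.2833, C ≈ 0.2583; the values sum to ACC[ABC] = 0.9
```

Note the efficiency property visible in the output: the three scores sum exactly to the grand coalition's utility, which is what makes the Shapley value a natural basis for fair benefit allocation among providers.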
Pages: 1173-1196 (24 pages)
Related Papers (50 total)
  • [21] PPeFL: Privacy-Preserving Edge Federated Learning With Local Differential Privacy
    Wang, Baocang
    Chen, Yange
    Jiang, Hang
    Zhao, Zhen
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (17) : 15488 - 15500
  • [22] CROWDFL: Privacy-Preserving Mobile Crowdsensing System Via Federated Learning
    Zhao, Bowen
    Liu, Ximeng
    Chen, Wei-Neng
    Deng, Robert H.
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (08) : 4607 - 4619
  • [23] A privacy-preserving federated learning scheme with homomorphic encryption and edge computing
    Zhu, Bian
    Niu, Ling
    ALEXANDRIA ENGINEERING JOURNAL, 2025, 118 : 11 - 20
  • [24] Communication-Efficient and Privacy-Preserving Verifiable Aggregation for Federated Learning
    Peng, Kaixin
    Shen, Xiaoying
    Gao, Le
    Wang, Baocang
    Lu, Yichao
    ENTROPY, 2023, 25 (08)
  • [25] Verifiable Privacy-Preserving Federated Learning Under Multiple Encrypted Keys
    Shen, Xiaoying
    Luo, Xue
    Yuan, Feng
    Wang, Baocang
    Chen, Yange
    Tang, Dianhua
    Gao, Le
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (02) : 3430 - 3445
  • [26] Privacy-Preserving and Reliable Decentralized Federated Learning
    Gao, Yuanyuan
    Zhang, Lei
    Wang, Lulu
    Choo, Kim-Kwang Raymond
    Zhang, Rui
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (04) : 2879 - 2891
  • [27] Privacy-preserving federated learning on lattice quantization
    Zhang, Lingjie
    Zhang, Hai
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2023, 21 (06)
  • [28] AddShare: A Privacy-Preserving Approach for Federated Learning
    Asare, Bernard Atiemo
    Branco, Paula
    Kiringa, Iluju
    Yeap, Tet
    COMPUTER SECURITY. ESORICS 2023 INTERNATIONAL WORKSHOPS, PT I, 2024, 14398 : 299 - 309
  • [29] POSTER: Privacy-preserving Federated Active Learning
    Kurniawan, Hendra
    Mambo, Masahiro
    SCIENCE OF CYBER SECURITY, SCISEC 2022 WORKSHOPS, 2022, 1680 : 223 - 226
  • [30] PPFLV: privacy-preserving federated learning with verifiability
    Zhou, Qun
    Shen, Wenting
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (09): : 12727 - 12743