FedFAIM: A Model Performance-Based Fair Incentive Mechanism for Federated Learning

Times Cited: 15
Authors
Shi, Zhuan [1 ]
Zhang, Lan [1 ]
Yao, Zhenyu [2 ]
Lyu, Lingjuan [3 ]
Chen, Cen [4 ]
Wang, Li [5 ]
Wang, Junhao [1 ]
Li, Xiang-Yang [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[2] Univ Liverpool, Dept Math Sci, Liverpool L69 3BX, England
[3] Sony AI, Tokyo 1080075, Japan
[4] East China Normal Univ, Sch Data Sci & Engn, Shanghai 200000, Peoples R China
[5] Ant Financial, AI Dept, Hangzhou 310000, Zhejiang, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Computational modeling; Resource management; Servers; Training; Collaborative work; Particle measurements; Atmospheric measurements; Federated learning; incentive mechanism; fairness; REPUTATION;
DOI
10.1109/TBDATA.2022.3183614
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline Classification Code
0812;
Abstract
Federated Learning (FL) has emerged as a privacy-preserving distributed machine learning paradigm. To motivate data owners to contribute to FL, research on FL incentive mechanisms is attracting growing interest. Existing monetary incentive mechanisms generally share the same FL model with all participants regardless of their contributions. Such a practice can be unfair to participants who contribute more and can encourage undesirable free-riding, especially when the final model is of great utility value to participants. In this paper, we propose a Fairness-Aware Incentive Mechanism for federated learning (FedFAIM) to address this problem. It satisfies two notions of fairness: 1) aggregation fairness, which determines the aggregation result according to data quality; and 2) reward fairness, which assigns each participant a unique model whose performance reflects that participant's contribution. Aggregation fairness is achieved through an efficient gradient aggregation scheme that examines the quality of local gradients and aggregates them according to data quality. Reward fairness is achieved through an efficient Shapley value-based contribution assessment method and a novel reward allocation method based on reputation and the distribution of local and global gradients. We further prove that reward fairness is theoretically guaranteed. Extensive experiments show that FedFAIM provides stronger incentives than comparable non-monetary FL incentive mechanisms while achieving a high level of fairness.
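As a concrete illustration of the Shapley value-based contribution assessment mentioned in the abstract, the sketch below estimates per-client Shapley values by Monte Carlo permutation sampling. It is a minimal illustration only: the names shapley_contributions, evaluate_subset, and n_samples are hypothetical, and the paper's own, more efficient assessment method is not reproduced here.

```python
# Minimal sketch (not the paper's method): Monte Carlo estimation of
# Shapley value contributions, as commonly used for FL contribution
# assessment. evaluate_subset(S) is a hypothetical callback returning the
# utility (e.g., validation accuracy) of the model aggregated from the
# local gradients of the clients in S.
import random
from typing import Callable, Dict, FrozenSet, Sequence


def shapley_contributions(
    clients: Sequence[str],
    evaluate_subset: Callable[[FrozenSet[str]], float],
    n_samples: int = 200,
    seed: int = 0,
) -> Dict[str, float]:
    rng = random.Random(seed)
    phi = {c: 0.0 for c in clients}
    for _ in range(n_samples):
        order = list(clients)
        rng.shuffle(order)                        # random permutation of clients
        prev_utility = evaluate_subset(frozenset())
        coalition = set()
        for c in order:
            coalition.add(c)
            utility = evaluate_subset(frozenset(coalition))
            phi[c] += utility - prev_utility      # marginal contribution of c
            prev_utility = utility
    return {c: v / n_samples for c, v in phi.items()}


# Toy usage with a hypothetical utility that grows with coalition size:
# shapley_contributions(["A", "B", "C"], lambda S: len(S) / 3.0, n_samples=50)
```

Sampling permutations rather than enumerating all coalitions keeps the number of utility evaluations manageable, which is why sampling-based approximations are the usual way to make Shapley computation tractable in FL settings.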
Pages: 1038 - 1050
Number of Pages: 13