FIFL: A Fair Incentive Mechanism for Federated Learning

Cited by: 24
Authors
Gao, Liang [1 ]
Li, Li [2 ]
Chen, Yingwen [1 ]
Zheng, Wenli [3 ]
Xu, ChengZhong [2 ]
Xu, Ming [1 ]
Affiliations
[1] National University of Defense Technology, Changsha, China
[2] University of Macau, State Key Laboratory of Internet of Things for Smart City (IoTSC), Macau, China
[3] Shanghai Jiao Tong University, Shanghai, China
Source
50th International Conference on Parallel Processing (ICPP) | 2021
Keywords
federated learning; incentive mechanism; attack detection; reputation; trust
DOI
10.1145/3472456.3472469
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Federated learning is a novel machine learning framework that enables multiple devices to collaboratively train high-performance models while preserving data privacy. Federated learning can be viewed as a form of crowdsourced computing in which a task publisher shares profit with workers in exchange for their data and computing resources. Intuitively, devices have no incentive to participate in training without rewards that match the resources they expend. In addition, guarding against malicious workers is essential, because they may upload meaningless updates to obtain undeserved rewards or to damage the global model. To address these problems, we propose FIFL, a fair incentive mechanism for federated learning. FIFL rewards workers fairly to attract reliable and efficient ones, while punishing and eliminating malicious ones based on a dynamic, real-time worker assessment mechanism. We evaluate the effectiveness of FIFL through theoretical analysis and comprehensive experiments. The results show that FIFL distributes rewards fairly according to workers' behaviour and contribution quality. FIFL increases system revenue by 0.2% to 3.4% over the baselines in reliable federations, and in unreliable scenarios containing attackers that degrade the model's performance, FIFL's system revenue outperforms the baselines by more than 46.7%.
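For readers skimming the record, the following minimal Python sketch illustrates the general idea the abstract describes: assess each worker per round, eliminate workers whose reputation falls below a threshold, and split the round's profit in proportion to the surviving workers' reputations. This is an illustrative toy under stated assumptions, not FIFL's actual algorithm; every name and constant here (ALPHA, BAN_THRESHOLD, update_reputation, allocate_rewards) is invented for this sketch.

from dataclasses import dataclass

ALPHA = 0.7          # weight of reputation history vs. current round (assumed)
BAN_THRESHOLD = 0.2  # reputation below which a worker is eliminated (assumed)

@dataclass
class Worker:
    name: str
    reputation: float = 0.5  # neutral prior
    banned: bool = False

def update_reputation(w: Worker, round_score: float) -> None:
    """Blend historical reputation with this round's assessed score.

    round_score in [0, 1] stands in for an update-quality assessment
    (e.g. how well the worker's gradient agrees with the aggregate);
    a meaningless or malicious update yields a low score.
    """
    w.reputation = ALPHA * w.reputation + (1 - ALPHA) * round_score
    if w.reputation < BAN_THRESHOLD:
        w.banned = True  # punish and eliminate suspected attackers

def allocate_rewards(workers: list, round_profit: float) -> dict:
    """Split the publisher's profit in proportion to reputation."""
    active = [w for w in workers if not w.banned]
    total = sum(w.reputation for w in active)
    return {w.name: round_profit * w.reputation / total for w in active} if total else {}

# Example: one reliable worker and one attacker uploading junk updates.
workers = [Worker("reliable"), Worker("attacker")]
for _ in range(5):
    update_reputation(workers[0], round_score=0.9)
    update_reputation(workers[1], round_score=0.05)
print(allocate_rewards(workers, round_profit=100.0))
# The attacker's reputation decays below BAN_THRESHOLD within a few rounds,
# so it is excluded and the reliable worker receives the full reward.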
Pages: 10