Contribution Matching-Based Hierarchical Incentive Mechanism Design for Crowd Federated Learning

Cited by: 2
Authors
Zhang, Hangjian [1 ]
Jin, Yanan [2 ]
Lu, Jianfeng [1 ,3 ]
Cao, Shuqin [3 ]
Dai, Qing
Yang, Shasha [1 ]
Affiliations
[1] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[2] Hubei Univ Econ, Sch Informat Management & Stat, Wuhan 430205, Peoples R China
[3] Wuhan Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430065, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Crowd intelligence; federated learning (FL); incentive mechanism; contract theory; Shapley value;
DOI
10.1109/ACCESS.2024.3365547
CLC classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
With growing public attention to data privacy protection, the problem of data silos has worsened, making it harder for crowd intelligence technologies to get off the ground. Meanwhile, Federated Learning (FL) has attracted great attention for its ability to break data silos and jointly build machine learning models. To address the data-silo problem in crowd intelligence, we propose a new Crowd Federated Learning (CFL) framework, a two-tier architecture consisting of a cloud server, model owners, and data collectors, which enables collaborative model training among individuals without exchanging raw data. However, existing work struggles to simultaneously balance the incentives of data collectors, model owners, and the cloud server, which can undermine participants' willingness to share and collaborate. To solve this problem, we propose a hierarchical incentive mechanism named FedCom, i.e., Crowd Federated Learning for Contribution matching, which matches participants' contributions with their rewards. We theoretically prove that FedCom satisfies contribution-matching fairness, and we conduct extensive comparative experiments against five baselines on one simulated dataset and four real-world datasets. Experimental results validate that FedCom reduces the computation time of contribution evaluation by a factor of about eight and improves global model performance by about 2% while ensuring fairness.
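The keywords and abstract point to Shapley-value-based contribution evaluation as the computational bottleneck that FedCom speeds up. The sketch below is not the paper's method; it is a minimal illustration of *exact* Shapley valuation over FL participants, whose cost grows exponentially in the number of participants (every coalition must be evaluated), which is why approximate or restructured schemes like the one proposed can cut evaluation time so sharply. The additive data-size utility is a hypothetical stand-in for real model-accuracy gains.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley value of each player for a coalition utility function.

    Enumerates all subsets of the other players, so cost is O(2^n) utility
    evaluations -- the bottleneck that contribution-evaluation schemes in
    federated learning try to avoid.
    """
    players = list(players)
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            # Weight of a coalition of size k not containing p.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for coalition in combinations(others, k):
                marginal = utility(set(coalition) | {p}) - utility(set(coalition))
                total += weight * marginal
        values[p] = total
    return values

# Hypothetical utility: accuracy gain proportional to the coalition's total
# data volume (illustrative numbers only, not from the paper).
data_sizes = {"A": 100, "B": 300, "C": 600}
u = lambda S: sum(data_sizes[m] for m in S) / 1000.0

print(shapley_values(data_sizes, u))
```

For an additive utility like this one, each player's Shapley value equals its own marginal contribution (0.1, 0.3, 0.6), and the values sum to the grand-coalition utility (the efficiency axiom) -- a quick sanity check when experimenting with other utility functions.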
Pages: 24735-24750
Page count: 16