A diversity-aware incentive mechanism for cross-silo federated learning with budget constraint

Cited by: 0
Authors
Wu, Xiaohong [1 ]
Lin, Yujun [1 ]
Zhong, Haotian [1 ]
Tao, Jie [1 ]
Gu, Yonggen [1 ]
Shen, Shigen [1 ]
Yu, Shui [2 ]
Affiliations
[1] Huzhou Univ, Sch Informat Engn, Huzhou 313000, Zhejiang, Peoples R China
[2] Univ Technol Sydney, Sch Comp Sci, Ultimo, NSW 2007, Australia
Keywords
Federated learning; Auction; Client selection; Data diversity; Data scale
DOI
10.1016/j.knosys.2025.113212
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) relies on a sufficient number of clients contributing their local data to model training, making the incentive mechanism a crucial component of its success. Given the significant impact of the training dataset on FL performance, we introduce two key metrics: the scale and the diversity of the training data. Both factors are vital for improving model accuracy. However, designing an incentive mechanism that accounts for both the scale and the diversity of data in FL is a challenging task. To address this, we propose an auction-based method for multi-dimensional objectives, called the diversity-aware incentive mechanism (DAIM). We prove that DAIM satisfies three important properties: truthfulness, individual rationality, and budget feasibility. Under this mechanism, clients are incentivized to truthfully report the size, distribution, and cost of their local datasets, and selected clients contribute all or part of their data to federated model training. Experimental results show that DAIM outperforms existing methods that ignore data diversity, especially when the budget is limited and when dataset distributions vary significantly across clients.
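The abstract describes selecting clients under a budget based on the scale and diversity of their data. As an illustrative sketch only, and not the paper's actual DAIM algorithm, the following shows a Singer-style greedy proportional-share selection rule, a standard building block for budget-feasible procurement auctions. The `ClientBid` fields, the `diversity` score in [0, 1], and the additive `value` function are all assumptions made here for the example; a truthful mechanism would additionally pay each winner its threshold price rather than its bid.

```python
# Hedged sketch: budget-feasible greedy client selection in the spirit of
# auction-based FL incentive mechanisms (NOT the paper's DAIM). Assumed:
# each client reports a cost bid, its data size, and a diversity score.

from dataclasses import dataclass


@dataclass
class ClientBid:
    name: str
    data_size: int      # reported number of local samples
    diversity: float    # assumed diversity score in [0, 1]
    cost: float         # reported cost of participating


def value(bid: ClientBid) -> float:
    # Assumed additive value: data scale weighted by diversity.
    return bid.data_size * bid.diversity


def select_clients(bids: list[ClientBid], budget: float) -> list[ClientBid]:
    """Greedy proportional-share rule: admit clients in decreasing
    value/cost order while each client's cost stays within its
    proportional share of the budget, which keeps total cost feasible."""
    ranked = sorted(bids, key=lambda b: value(b) / b.cost, reverse=True)
    selected: list[ClientBid] = []
    total_value = 0.0
    for b in ranked:
        # Proportional-share test: cost must not exceed this client's
        # value share of the budget among the tentatively selected set.
        if b.cost <= budget * value(b) / (total_value + value(b)):
            selected.append(b)
            total_value += value(b)
    return selected
```

For example, a client with a large but homogeneous dataset can lose to a smaller client whose data is more diverse, which is the intuition behind weighting scale by diversity.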
Pages: 13