Incentivizing Federated Learning Under Long-Term Energy Constraint via Online Randomized Auctions

Cited by: 19
Authors
Yuan, Yulan [1 ]
Jiao, Lei [2 ]
Zhu, Konglin [1 ,3 ]
Zhang, Lin [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
[2] Univ Oregon, Dept Comp & Informat Sci, Eugene, OR 97403 USA
[3] Purple Mt Labs, Nanjing 210023, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Collaborative work; Computational modeling; Data models; Costs; Mobile handsets; Biological system modeling; Wireless communication; Federated learning; energy constraint; incentive mechanism; OPTIMIZATION APPROACH; NETWORKS; CONVERGENCE; MECHANISM; PRIVACY; DESIGN;
DOI
10.1109/TWC.2021.3137024
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809;
Abstract
Mobile users are often reluctant to participate in federated learning to train models because it consumes excessive amounts of limited resources such as the mobile devices' energy. We propose an auction-based online incentive mechanism, FLORA, which allows users to submit bids dynamically and repeatedly and compensates these bids subject to each user's long-term battery capacity. We formulate a nonlinear mixed-integer program to capture social cost minimization in the federated learning system. We then design multiple polynomial-time online algorithms, including a fractional online algorithm and a randomized rounding algorithm to select winning bids and control training accuracy, as well as a payment allocation algorithm to calculate remuneration based on the bid-winning probabilities. While maintaining satisfactory quality of the trained global model, our approach works on the fly without relying on unknown future inputs, and provably achieves a sublinear regret and a sublinear fit over time while attaining the economic properties of truthfulness and individual rationality in expectation. Extensive trace-driven evaluations confirm the practical superiority of FLORA over existing alternatives.
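To illustrate the kind of randomized winner selection and probability-based payment that the abstract outlines, the minimal Python sketch below rounds a fractional acceptance decision for each bid and pays each user in proportion to its empirical bid-winning probability. All names, the rounding rule, and the payment formula here are illustrative assumptions, not the paper's actual FLORA algorithms.

import random

# Hypothetical sketch: randomized rounding of fractional bid-acceptance
# decisions, with payments scaled by empirical bid-winning probabilities.
# The rounding and payment rules below are assumptions for illustration.

def round_and_pay(bids, fractional_x, num_trials=1000, seed=0):
    """bids: user -> claimed cost; fractional_x: user -> acceptance level in [0, 1]."""
    rng = random.Random(seed)
    wins = {u: 0 for u in bids}
    for _ in range(num_trials):
        for u, x in fractional_x.items():
            # Accept each bid independently with probability equal to its
            # fractional decision variable (assumed rounding scheme).
            if rng.random() < x:
                wins[u] += 1
    win_prob = {u: wins[u] / num_trials for u in bids}
    # Assumed payment rule: expected remuneration scales with winning probability.
    payments = {u: bids[u] * win_prob[u] for u in bids}
    return win_prob, payments

if __name__ == "__main__":
    bids = {"user1": 2.0, "user2": 3.5, "user3": 1.2}
    fractional_x = {"user1": 0.8, "user2": 0.3, "user3": 0.6}
    print(round_and_pay(bids, fractional_x))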
Pages: 5129-5144
Page count: 16