Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning

Cited by: 47
Authors
Cho, Yae Jee [1 ]
Gupta, Samarth [1 ]
Joshi, Gauri [1 ]
Yagan, Osman [1 ]
Affiliations
[1] Carnegie Mellon University, Electrical and Computer Engineering, Pittsburgh, PA 15213, USA
Source
2020 54th Asilomar Conference on Signals, Systems, and Computers, 2020
Funding
U.S. National Science Foundation;
Keywords
distributed optimization; federated learning; fairness; client selection; multi-armed bandits;
DOI
10.1109/IEEECONF51394.2020.9443523
CLC classification number
TP [automation technology; computer technology];
Discipline classification code
0812;
Abstract
Due to communication constraints and intermittent client availability in federated learning, only a subset of clients can participate in each training round. While most prior works assume uniform and unbiased client selection, recent work on biased client selection [1] has shown that selecting clients with higher local losses can improve error convergence speed. However, previously proposed biased selection strategies either require additional communication cost to evaluate the exact local losses or rely on stale local losses, which can even cause the model to diverge. In this paper, we present a bandit-based communication-efficient client selection strategy, UCB-CS, that achieves faster convergence with lower communication overhead. We also demonstrate how client selection can be used to improve fairness.
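To make the selection idea concrete, below is a minimal Python sketch of a UCB-style, loss-biased client selection step as described in the abstract: each client's index combines its most recently reported local loss with an exploration bonus, so no separate communication round is needed to evaluate losses. The function name `ucb_cs_select`, the exploration weight `c`, and the exact bonus form are illustrative assumptions, not the authors' exact UCB-CS algorithm.

```python
import numpy as np

def ucb_cs_select(loss_estimates, selection_counts, round_t, num_select, c=1.0):
    """Select `num_select` clients with the largest UCB index on local loss.

    loss_estimates   : per-client estimate of local loss, updated only from the
                       losses clients report when they participate (no separate
                       loss-evaluation round).
    selection_counts : number of times each client has been selected so far.
    round_t          : current communication round (>= 1).
    c                : exploration weight (assumed hyperparameter).
    """
    counts = np.asarray(selection_counts, dtype=float)
    # Clients that have never been selected get an infinite bonus,
    # so each client is tried at least once.
    bonus = np.where(counts == 0,
                     np.inf,
                     c * np.sqrt(np.log(max(round_t, 1)) / np.maximum(counts, 1)))
    ucb_index = np.asarray(loss_estimates, dtype=float) + bonus
    # Bias selection toward clients with high (estimated) local loss.
    return np.argsort(-ucb_index)[:num_select]

# Example: 10 clients, pick 3 in the first round.
losses = np.random.rand(10)        # stand-in for previously reported local losses
counts = np.zeros(10, dtype=int)
selected = ucb_cs_select(losses, counts, round_t=1, num_select=3)
```

In a full federated training loop, `loss_estimates` for the selected clients would be refreshed with the losses they report after local training; reusing these already-communicated values is what keeps the scheme communication-efficient in the sense described above.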
Pages: 1066-1069
Number of pages: 4
References
19 in total
[1] [Anonymous], 1998, Computer.
[2] Bubeck S., Cesa-Bianchi N. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning, 2012, 5(1): 1-122.
[3] Cho Y. J., 2020. Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies. arXiv preprint.
[4] Goetz J., 2019. Active Federated Learning. arXiv:1909.12641.
[5] Gupta S., 2018. Correlated Multi-Armed Bandits with a Latent Random Source.
[6] Gupta S., 2020. Multi-Armed Bandits with Correlated Arms.
[7] Hsu T.-M. H., 2019. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification.
[8] Joe-Wong C., 2012. IEEE INFOCOM, p. 1206. DOI: 10.1109/INFCOM.2012.6195481.
[9] Kairouz P., 2019. Advances and Open Problems in Federated Learning. arXiv preprint.
[10] Lan T., 2010. An Axiomatic Theory of Fairness in Network Resource Allocation. IEEE INFOCOM.