Efficient and Less Centralized Federated Learning

Cited by: 12
Authors
Chou, Li [1 ]
Liu, Zichang [1 ]
Wang, Zhuang [1 ]
Shrivastava, Anshumali [1 ]
Affiliations
[1] Rice Univ, Dept Comp Sci, Houston, TX 77005 USA
Source
MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES | 2021 | Vol. 12975
Funding
U.S. National Science Foundation
Keywords
Machine learning; Federated learning; Distributed systems;
DOI
10.1007/978-3-030-86486-6_47
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
With the rapid growth in mobile computing, massive amounts of data and computing resources are now located at the edge. Federated learning (FL) has therefore become a widely adopted distributed machine learning (ML) paradigm that aims to harness this expanding, skewed data locally in order to train rich and informative models. In centralized FL, a collection of devices collaboratively solves an ML task under the coordination of a central server. However, existing FL frameworks make overly simplistic assumptions about network connectivity and ignore the communication bandwidth of the different links in the network. In this paper, we present and study a novel FL algorithm in which devices mostly collaborate with other devices in a pairwise manner. Our nonparametric approach exploits network topology to reduce communication bottlenecks. We evaluate our approach on various FL benchmarks and demonstrate that it achieves 10x better communication efficiency and around an 8% increase in accuracy compared to the centralized approach.
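The pairwise collaboration the abstract describes follows the general pattern of gossip-style decentralized averaging: instead of all devices synchronizing with a central server, devices are matched in pairs and average their model parameters directly. The sketch below illustrates that general pattern only; it is not the paper's algorithm, and the device count, number of rounds, the placeholder local update, and the simple averaging rule are all assumptions made for the example.

```python
import random

def local_update(weights, lr=0.1):
    # Placeholder for a device's local training step; a real client would
    # run SGD on its private data. Here we simply shrink each weight.
    return [w - lr * w for w in weights]

def gossip_round(models):
    """Pair devices at random and replace each pair's weights with their average."""
    ids = list(models.keys())
    random.shuffle(ids)
    for a, b in zip(ids[::2], ids[1::2]):  # any unpaired device sits out
        avg = [(wa + wb) / 2 for wa, wb in zip(models[a], models[b])]
        models[a] = list(avg)
        models[b] = list(avg)
    return models

if __name__ == "__main__":
    random.seed(0)
    # Four devices, each holding a 3-weight model initialized differently
    # (standing in for heterogeneous, non-IID local data).
    models = {i: [float(i + 1)] * 3 for i in range(4)}
    for _ in range(5):
        models = {i: local_update(w) for i, w in models.items()}
        models = gossip_round(models)
    vals = [w[0] for w in models.values()]
    print("spread after 5 rounds:", round(max(vals) - min(vals), 4))
```

Because each pairwise exchange involves only one peer-to-peer link rather than a hub, this style of update is what lets topology-aware schemes route communication around bottleneck links.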
Pages: 772-787 (16 pages)