Communication-Efficient Federated Learning via Predictive Coding

Cited by: 10
Authors
Yue, Kai [1]
Jin, Richeng [1]
Wong, Chau-Wai [1]
Dai, Huaiyu [1]
Affiliations
[1] NC State Univ, Dept Elect & Comp Engn, Raleigh, NC 27695 USA
Funding
U.S. National Science Foundation
Keywords
Predictive models; Servers; Collaborative work; Predictive coding; Entropy coding; Costs; Quantization (signal); Federated learning; distributed optimization; predictive coding;
DOI
10.1109/JSTSP.2022.3142678
CLC Classification Codes
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Federated learning enables remote workers to collaboratively train a shared machine learning model while keeping the training data local. For wireless mobile devices, the communication overhead is a critical bottleneck due to limited power and bandwidth. Prior work has used data compression tools such as quantization and sparsification to reduce this overhead. In this paper, we propose a predictive coding based compression scheme for federated learning. The scheme maintains prediction functions shared among all devices, so each worker transmits only a compressed residual vector derived from the predicted reference. In each communication round, we select the predictor and quantizer based on a rate-distortion cost, and further reduce redundancy with entropy coding. Extensive simulations show that the communication cost can be reduced by up to 99%, with learning performance even better than that of baseline methods.
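For illustration, here is a minimal Python sketch of the per-round predict/quantize/select loop the abstract describes: form a residual against a shared prediction, try a few quantizer step sizes, and keep the combination with the lowest rate-distortion cost. The candidate predictor forms, the step sizes, and the empirical-entropy rate proxy (standing in for the actual entropy coder) are assumptions made for this example, not the authors' exact design.

import numpy as np

def candidate_predictions(history):
    """Build candidate references from previously decoded updates.
    The predictor set here is an illustrative assumption."""
    prev = history[-1]
    candidates = {"zero": np.zeros_like(prev), "previous": prev}
    if len(history) >= 2:
        # simple linear extrapolation from the last two updates
        candidates["linear"] = 2.0 * history[-1] - history[-2]
    return candidates

def uniform_quantize(x, step):
    """Uniform scalar quantizer with the given step size."""
    return np.round(x / step) * step

def empirical_entropy(symbols):
    """Empirical entropy (bits/element), a proxy for the entropy-coded rate."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def encode_update(update, history, steps=(0.5, 0.1, 0.02), lam=1.0):
    """Pick the (predictor, quantizer) pair minimizing distortion + lam * rate.
    Only the quantized residual and the chosen indices need be transmitted.
    Assumes history holds at least one previously decoded update."""
    best = None
    for name, ref in candidate_predictions(history).items():
        residual = update - ref
        for step in steps:
            q = uniform_quantize(residual, step)
            distortion = np.mean((residual - q) ** 2)
            rate = empirical_entropy(np.round(residual / step))
            cost = distortion + lam * rate
            if best is None or cost < best[0]:
                best = (cost, name, step, q)
    _, name, step, q_residual = best
    return name, step, q_residual  # q_residual would then be entropy coded

# Example usage for one hypothetical worker round:
rng = np.random.default_rng(0)
history = [0.1 * rng.standard_normal(1000) for _ in range(2)]
update = history[-1] + 0.01 * rng.standard_normal(1000)
predictor, step, q_residual = encode_update(update, history)

Because the prediction functions are shared, the server can reconstruct the update as prediction plus decoded residual from only the transmitted indices and residual bits; the trade-off between residual fidelity and bit rate is controlled here by the hypothetical lam parameter.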
Pages: 369-380
Page count: 12