Reducing Model Cost Based on the Weights of Each Layer for Federated Learning Clustering

Cited by: 9
Authors
Kim, Hyungbin [1 ]
Kim, Yongho [1 ]
Park, Hyunhee [1 ]
Affiliations
[1] Myongji Univ, Dept Informat & Commun Engn, Yongin, South Korea
Source
12TH INTERNATIONAL CONFERENCE ON UBIQUITOUS AND FUTURE NETWORKS (ICUFN 2021) | 2021
Funding
National Research Foundation of Singapore
Keywords
Federated learning; Distributed machine learning; Distributed databases; Distributed processing; Clustering algorithms; Computational modeling;
DOI
10.1109/ICUFN49451.2021.9528575
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Federated Learning (FL) uses a different learning framework from conventional machine learning, which requires centralizing the training data. Federated learning has the advantage of protecting privacy because training is performed on each client device rather than on the central server, and only the weight parameter values, i.e., the learning results, are sent to the central server. However, federated learning performs relatively poorly compared to cloud computing, and in practice it is difficult to build a federated learning environment due to the high communication cost between the server and the many clients. In this paper, we propose Federated Learning with Clustering algorithms (FLC). FLC clusters clients with similar characteristics by analyzing the weights of each layer of a machine learning model, and then performs federated learning within each cluster. By reducing the number of clients contributing to each model, FLC reduces the communication cost per model. Extensive simulations confirm that, compared with standard federated learning, the proposed FLC improves accuracy by 2.4% and loss by 47%.
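The clustering step described in the abstract, grouping clients whose per-layer weights are similar, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names, the flattening of layer weights into a single vector, and the use of k-means with farthest-point initialization are all assumptions made for the example.

```python
import numpy as np

def flatten_weights(layer_weights):
    """Concatenate a client's per-layer weight arrays into a single vector."""
    return np.concatenate([w.ravel() for w in layer_weights])

def cluster_clients(client_weights, k, iters=20):
    """Group clients with similar layer weights via a simple k-means sketch.

    client_weights: list of clients, each a list of per-layer weight arrays.
    Returns one cluster index per client.
    """
    X = np.stack([flatten_weights(w) for w in client_weights])
    # Deterministic farthest-point initialization: start from client 0,
    # then repeatedly pick the client farthest from the chosen centers.
    centers = [X[0]]
    for _ in range(1, k):
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dist.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        # Assign every client to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center; keep the old one if its cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Federated averaging would then be run separately within each cluster, so each aggregated model communicates with only the clients assigned to it, which is how the per-model communication cost is reduced.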
Pages: 405-408
Page count: 4
References
9 records
[1] Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I, Zaharia M. A View of Cloud Computing. COMMUNICATIONS OF THE ACM, 2010, 53(4): 50-58.
[2] Bonawitz K, et al. Proceedings of Machine Learning and Systems, 2019, 1: 374.
[3] Chen F, et al. arXiv:1802.07876, 2018.
[4] Huang A, Chen Y, Liu Y, Chen T, Yang Q. RPN: A Residual Pooling Network for Efficient Federated Learning. ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325: 1223-1229.
[5] Kairouz P, et al. arXiv:1912.04977, 2019.
[6] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-Based Learning Applied to Document Recognition. PROCEEDINGS OF THE IEEE, 1998, 86(11): 2278-2324.
[7] McMahan HB, et al. Proceedings of Machine Learning Research, 2017, 54: 1273.
[8] Nishio T, et al. IEEE ICC, 2019.
[9] Zhao Y, et al. arXiv:1806.00582, 2018, p. 1.