FedCO: Communication-Efficient Federated Learning via Clustering Optimization

Cited by: 7
Authors
Al-Saedi, Ahmed A. [1 ]
Boeva, Veselka [1 ]
Casalicchio, Emiliano [1 ,2 ]
Affiliations
[1] Blekinge Inst Technol, Dept Comp Sci, SE-37179 Karlskrona, Sweden
[2] Sapienza Univ Rome, Dept Comp Sci, I-00185 Rome, Italy
Keywords
federated learning; Internet of Things; clustering; communication efficiency; convolutional neural network;
DOI
10.3390/fi14120377
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated Learning (FL) provides a promising solution for preserving privacy when learning shared models on distributed devices without sharing local data with a central server. However, most existing work shows that FL incurs high communication costs. To address this challenge, we propose a clustering-based federated solution, entitled Federated Learning via Clustering Optimization (FedCO), which optimizes model aggregation and reduces communication costs. In order to reduce the communication costs, we first divide the participating workers into groups based on the similarity of their model parameters and then select only one representative, the best-performing worker, from each group to communicate with the central server. Then, in each successive round, we apply the Silhouette validation technique to check whether each representative still fits tightly within its current cluster. If not, the representative is either moved into a more appropriate cluster or forms a singleton cluster. Finally, we use split optimization to update and improve the whole clustering solution. The updated clustering is used to select new cluster representatives. In that way, the proposed FedCO approach updates clusters by repeatedly evaluating and splitting clusters whenever doing so improves the workers' partitioning. The potential of the proposed method is demonstrated on publicly available datasets and LEAF datasets under the IID and Non-IID data distribution settings. The experimental results indicate that our proposed FedCO approach is superior to the state-of-the-art FL approaches, i.e., FedAvg, FedProx, and CMFL, in reducing communication costs and achieving better accuracy in both the IID and Non-IID cases.
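The cluster-then-select scheme the abstract describes can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: the choice of k-means as the clustering method, the toy flattened parameter vectors, and the per-worker accuracy values are all assumptions introduced for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

rng = np.random.default_rng(0)

# Toy setup: 12 workers, each summarized by a flattened model-parameter
# vector. Two synthetic groups give the clustering some structure to find.
params = np.vstack([
    rng.normal(0.0, 0.1, size=(6, 8)),   # group A, centered near 0
    rng.normal(1.0, 0.1, size=(6, 8)),   # group B, centered near 1
])
accuracy = rng.uniform(0.6, 0.9, size=12)  # hypothetical local accuracies

# Step 1: group workers by the similarity of their model parameters.
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(params)

# Step 2: per-worker Silhouette scores; a low score flags a worker that no
# longer fits its cluster and may be reassigned or split into a singleton.
sil = silhouette_samples(params, labels)

# Step 3: only one representative per cluster -- the best-performing
# worker -- communicates with the central server this round.
representatives = {
    c: int(np.argmax(np.where(labels == c, accuracy, -np.inf)))
    for c in range(k)
}

print(representatives)  # cluster id -> index of its representative worker
```

Communication savings come from step 3: with 48 workers partitioned into, say, 5 clusters, only 5 uploads reach the server per round instead of 48.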
Pages: 27