Enhancing Edge-Assisted Federated Learning with Asynchronous Aggregation and Cluster Pairing

Cited by: 1
Authors
Sha, Xiaobao [1 ,2 ]
Sun, Wenjian [1 ]
Liu, Xiang [1 ]
Luo, Yang [1 ,2 ]
Luo, Chunbo [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Huzhou, Huzhou 313001, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
Keywords
federated learning; edge computing; Non-IID data
DOI
10.3390/electronics13112135
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) is widely regarded as highly promising because it enables a large number of clients to collaboratively train high-performance machine learning models while preserving data privacy by keeping the data local. However, many existing FL frameworks adopt a two-layer architecture that requires frequent exchanges of large-scale model parameters between clients and a remote cloud server over often unstable networks, resulting in significant communication overhead and latency. To address this issue, we propose introducing edge servers between the clients and the cloud server to assist in aggregating local models, combining asynchronous client-edge model aggregation with synchronous edge-cloud model aggregation. By exploiting the clients' idle time to accelerate training, the proposed framework achieves faster convergence and reduces communication traffic. To make full use of the grouping structure inherent in three-layer FL, we propose a similarity-based matching strategy between edges and clients that improves the effectiveness of asynchronous training. We further introduce model-contrastive learning into the loss function and personalize the clients' local models to mitigate the learning issues caused by asynchronous local training, further improving the convergence speed. Extensive experiments confirm that our method achieves significant improvements in model accuracy and convergence speed over other state-of-the-art federated learning architectures.
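The abstract mentions adding a model-contrastive term to the local loss to counter drift from asynchronous training. The paper's exact formulation is not given in this record; the following is a minimal sketch of a MOON-style model-contrastive loss, assuming cosine similarity between the local model's representation, the global (edge-aggregated) model's representation, and the client's previous local representation, with a temperature `tau`. All function names and the choice of plain cosine similarity are illustrative assumptions, not the authors' code.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """MOON-style contrastive term (illustrative): pull the local
    representation z_local toward the aggregated model's z_global
    (positive pair) and away from the stale previous-round local
    representation z_prev (negative pair)."""
    pos = math.exp(cosine(z_local, z_global) / tau)
    neg = math.exp(cosine(z_local, z_prev) / tau)
    return -math.log(pos / (pos + neg))
```

As a sanity check of the intended behavior: a representation aligned with the global model yields a lower loss than one that has drifted back toward the stale local model, so minimizing this term regularizes asynchronous local updates toward the aggregated model.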
Pages: 16