Semi-Decentralized Federated Edge Learning for Fast Convergence on Non-IID Data

Cited by: 22
Authors
Sun, Yuchang [1 ]
Shao, Jiawei [1 ]
Mao, Yuyi [2 ]
Wang, Jessie Hui [3 ]
Zhang, Jun [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept ECE, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Dept EIE, Hong Kong, Peoples R China
[3] Tsinghua Univ, Inst Network Sci & Cyberspace, BNRist, Beijing, Peoples R China
Keywords
Federated learning; non-IID data; distributed machine learning; communication efficiency;
DOI
10.1109/WCNC51071.2022.9771904
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency of cloud-based machine learning solutions while preserving data privacy. Unfortunately, the learning performance of FEEL may be compromised by the limited training data in a single edge cluster. In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL). By allowing model aggregation across different edge clusters, SD-FEEL retains the benefit of FEEL in reducing the training latency, while improving the learning performance through access to richer training data from multiple edge clusters. We present a training algorithm for SD-FEEL with three main procedures in each round, namely local model updates, intra-cluster model aggregation, and inter-cluster model aggregation, and prove its convergence on non-independent and identically distributed (non-IID) data. We also characterize the effects of the network topology of the edge servers and the communication overhead of inter-cluster model aggregation on the training performance. Experimental results corroborate our analysis and demonstrate the effectiveness of SD-FEEL in achieving faster convergence than conventional federated learning architectures. In addition, guidelines on choosing critical hyper-parameters of the training algorithm are provided.
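To make the round structure described in the abstract concrete, the following is a minimal, self-contained Python sketch (not the authors' code) of one SD-FEEL round: local updates on each client, intra-cluster averaging at the client's edge server, and inter-cluster mixing among neighboring edge servers over a given topology. The toy quadratic local objectives, the cluster assignment, the edge-server graph, and all function and parameter names (local_update, sd_feel_round, inter_rounds, etc.) are illustrative assumptions, not taken from the paper.

import numpy as np

def local_update(w, target, lr=0.1, steps=5):
    # Stand-in for local training: gradient steps on a toy quadratic objective
    # 0.5 * ||w - target||^2, where "target" plays the role of a client's non-IID data.
    for _ in range(steps):
        w = w - lr * (w - target)
    return w

def sd_feel_round(client_w, client_targets, cluster_of, neighbors, inter_rounds=1):
    # 1) Local model updates: every client refines its own copy of the model.
    client_w = {c: local_update(w, client_targets[c]) for c, w in client_w.items()}

    # 2) Intra-cluster aggregation: each edge server averages the models of its clients.
    per_edge = {}
    for c, w in client_w.items():
        per_edge.setdefault(cluster_of[c], []).append(w)
    edge_w = {e: np.mean(ws, axis=0) for e, ws in per_edge.items()}

    # 3) Inter-cluster aggregation: each edge server mixes its model with its
    #    neighbors' models, repeated inter_rounds times over the edge-server graph.
    for _ in range(inter_rounds):
        edge_w = {e: np.mean([edge_w[n] for n in neighbors[e] + [e]], axis=0)
                  for e in edge_w}

    # Edge servers broadcast the aggregated model back to their clients.
    return {c: edge_w[cluster_of[c]].copy() for c in client_w}

# Toy usage: 4 clients split over 2 connected edge clusters.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    client_w = {c: np.zeros(3) for c in range(4)}
    client_targets = {c: rng.normal(size=3) for c in range(4)}  # stands in for non-IID data
    cluster_of = {0: "A", 1: "A", 2: "B", 3: "B"}
    neighbors = {"A": ["B"], "B": ["A"]}
    for _ in range(20):
        client_w = sd_feel_round(client_w, client_targets, cluster_of, neighbors)
    print(client_w[0])  # all clients drift toward the average of the four targets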
Pages: 1898-1903
Page count: 6
Related papers (50 records in total)
  • [21] Graph-Attention-Based Decentralized Edge Learning for Non-IID Data
    Tian, Zhuojun
    Zhang, Zhaoyang
    Jin, Richeng
    2023 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS, ICC WORKSHOPS, 2023, : 110 - 115
  • [22] FedGAN: A Federated Semi-supervised Learning from Non-IID Data
    Zhao, Chen
    Gao, Zhipeng
    Wang, Qian
    Mo, Zijia
    Yu, Xinlei
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS (WASA 2022), PT II, 2022, 13472 : 181 - 192
  • [23] Client Selection for Federated Learning With Non-IID Data in Mobile Edge Computing
    Zhang, Wenyu
    Wang, Xiumin
    Zhou, Pan
    Wu, Weiwei
    Zhang, Xinglin
    IEEE ACCESS, 2021, 9 : 24462 - 24474
  • [24] Differentially private federated learning with non-IID data
    Cheng, Shuyan
    Li, Peng
    Wang, Ruchuan
    Xu, He
    COMPUTING, 2024, 106 (07) : 2459 - 2488
  • [25] Adaptive Federated Deep Learning With Non-IID Data
    Zhang, Ze-Hui
    Li, Qing-Dan
    Fu, Yao
    He, Ning-Xin
    Gao, Tie-Gang
Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (12): 2493 - 2506
  • [26] A Novel Approach for Federated Learning with Non-IID Data
    Nguyen, Hiep
    Warrier, Harikrishna
    Gupta, Yogesh
    2022 9TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE, ISCMI, 2022, : 62 - 67
  • [27] Federated Dictionary Learning from Non-IID Data
    Gkillas, Alexandros
    Ampeliotis, Dimitris
    Berberidis, Kostas
    2022 IEEE 14TH IMAGE, VIDEO, AND MULTIDIMENSIONAL SIGNAL PROCESSING WORKSHOP (IVMSP), 2022,
  • [28] EFL: ELASTIC FEDERATED LEARNING ON NON-IID DATA
    Ma, Zichen
    Lu, Yu
    Li, Wenye
    Cui, Shuguang
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
  • [29] Dual Adversarial Federated Learning on Non-IID Data
    Zhang, Tao
    Yang, Shaojing
    Song, Anxiao
    Li, Guangxia
    Dong, Xuewen
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2022, PT III, 2022, 13370 : 233 - 246
  • [30] Decoupled Federated Learning for ASR with Non-IID Data
    Zhu, Han
    Wang, Jindong
    Cheng, Gaofeng
    Zhang, Pengyuan
    Yan, Yonghong
    INTERSPEECH 2022, 2022, : 2628 - 2632