Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach

Cited by: 2
Authors
Shen, Jinglong [1 ]
Wang, Xiucheng [1 ]
Cheng, Nan [1 ]
Ma, Longfei [1 ]
Zhou, Conghao [2 ]
Zhang, Yuan [3 ]
Affiliations
[1] Xidian Univ, Sch Telecommun Engn, Xian, Peoples R China
[2] Univ Waterloo, Dept Elect Comp Engn, Waterloo, ON, Canada
[3] Univ Elect Sci & Technol China, Sch CSE, Chengdu, Peoples R China
Source
IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM | 2023
Keywords
Federated learning; split learning; client heterogeneity; client-pairing; greedy;
DOI
10.1109/GLOBECOM54140.2023.10437666
CLC classification
TM [Electrical engineering]; TN [Electronic and communication technology];
Subject classification
0808; 0809;
Abstract
Federated Learning (FL) is a promising paradigm widely used in privacy-preserving machine learning. It enables distributed devices to collaboratively train a model while avoiding data transfer between clients. Nevertheless, FL suffers from a training-speed bottleneck caused by client heterogeneity, which increases training latency and delays server aggregation. To address this issue, a novel Split Federated Learning (SFL) framework is proposed. It pairs clients according to their computational resources and inter-client communication rates; the neural network model is split into two parts at the logical level, and each client computes only its assigned part using Split Learning (SL) to accomplish forward inference and backward training. In addition, a heuristic greedy algorithm is proposed to solve the client-pairing problem effectively by recasting the training-latency optimization as a graph edge-selection problem. Simulation results show that the proposed method significantly improves FL training speed and achieves high performance under both independent and identically distributed (IID) and non-IID data distributions.
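The abstract's greedy client-pairing idea can be illustrated with a minimal sketch. The latency model in `pair_latency`, the `split` ratio, and all variable names below are illustrative assumptions, not the paper's exact formulation: every client pair becomes a weighted graph edge whose weight estimates that pair's training latency, and the greedy step repeatedly selects the cheapest edge whose endpoints are still unpaired.

```python
import itertools

def pair_latency(c_i, c_j, rate_ij, workload=1.0, split=0.5):
    # Hypothetical latency model: each client computes its share of the
    # split model (inversely proportional to its capacity), and the pair
    # exchanges activations/gradients over their inter-client link.
    compute = split * workload / c_i + (1 - split) * workload / c_j
    comm = workload / rate_ij
    return compute + comm

def greedy_pairing(capacities, rates):
    """Greedily pick the lowest-latency edges until no clients can be paired.

    capacities: list of per-client compute capacities.
    rates: symmetric matrix of inter-client communication rates.
    Returns a list of (i, j, latency) pairs.
    """
    n = len(capacities)
    # Build all candidate edges, sorted by estimated pair latency.
    edges = sorted(
        (pair_latency(capacities[i], capacities[j], rates[i][j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    paired, pairs = set(), []
    for lat, i, j in edges:
        if i not in paired and j not in paired:
            paired.update((i, j))
            pairs.append((i, j, lat))
    return pairs
```

With equal link rates, this sketch tends to pair the fastest clients with each other first; the paper's actual algorithm may order or weight edges differently to balance latency across pairs.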
Pages: 5847-5852 (6 pages)
Related papers (12 total)
  • [1] Secure Federated Matrix Factorization
    Chai, Di
    Wang, Leye
    Chen, Kai
    Yang, Qiang
    [J]. IEEE INTELLIGENT SYSTEMS, 2021, 36 (05) : 11 - 19
  • [2] Distributed learning of deep neural network over multiple agents
    Gupta, Otkrist
    Raskar, Ramesh
    [J]. JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2018, 116 : 1 - 8
  • [3] HE KM, 2016, PROC CVPR IEEE, P770, DOI 10.1109/CVPR.2016.90
  • [4] Jinglong S., 2023, RINGSFL ADAPTIVE SPL
  • [5] Krizhevsky A, 2009, Technical report
  • [6] McMahan B., 2017, Artificial Intelligence and Statistics, V54, P1273, DOI 10.48550/ARXIV.1602.05629
  • [7] Shen J., 2021, PROC IEEE VEH TECHNO, P1
  • [8] Thapa C, 2022, AAAI CONF ARTIF INTE, P8485
  • [9] FEDBERT: When Federated Learning Meets Pre-training
    Tian, Yuanyishu
    Wan, Yao
    Lyu, Lingjuan
    Yao, Dezhong
    Jin, Hai
    Sun, Lichao
    [J]. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (04)
  • [10] Multi-Agent Imitation Learning for Pervasive Edge Computing: A Decentralized Computation Offloading Algorithm
    Wang, Xiaojie
    Ning, Zhaolong
    Guo, Song
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (02) : 411 - 425