Accelerating Split Federated Learning Over Wireless Communication Networks

Cited by: 5
Authors
Xu, Ce [1]
Li, Jinxuan [2]
Liu, Yuan [1]
Ling, Yushi [2]
Wen, Miaowen [1]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou 510641, Peoples R China
[2] Guangdong Power Grid Co Ltd, Guangzhou Power Supply Bur, CSG, Guangzhou 510620, Peoples R China
Keywords
Split federated learning; model splitting; resource allocation
DOI
10.1109/TWC.2023.3327372
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
The development of artificial intelligence (AI) provides opportunities for deep neural network (DNN)-based applications. However, the large number of parameters and the computational complexity of DNNs make them difficult to deploy on resource-constrained edge devices. An efficient way to address this challenge is model partitioning/splitting, in which the DNN is divided into two parts deployed on the device and the server, respectively, for co-training or co-inference. In this paper, we consider a split federated learning (SFL) framework that combines the parallel model-training mechanism of federated learning (FL) with the model-splitting structure of split learning (SL). We consider a practical scenario of heterogeneous devices, each with its own DNN split point. We formulate a joint problem of split point selection and bandwidth allocation to minimize the system latency. Using alternating optimization, we decompose the problem into two sub-problems and solve each optimally. Experimental results demonstrate the superiority of our approach in latency reduction and accuracy improvement.
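The following is a minimal Python sketch of the alternating scheme the abstract describes, not the paper's actual algorithm. It assumes a simple latency model in which the round latency is the maximum over devices of (on-device compute + uplink of the split-layer activation + server-side compute); all numeric values (layer FLOPs, activation sizes, device speeds, link rates) are hypothetical placeholders. With bandwidth fixed, each device's best split point decouples and can be found by enumerating the candidate layers; with splits fixed, the min-max bandwidth allocation can be found by bisecting on a target latency.

import numpy as np

# Hypothetical toy DNN: per-layer compute cost and activation size.
LAYER_FLOPS = np.array([2e8, 4e8, 8e8, 8e8, 4e8])    # FLOPs per layer
ACT_BITS    = np.array([4e6, 2e6, 1e6, 5e5, 2.5e5])  # bits output by each layer

F_DEV = np.array([1e9, 2e9, 5e8])   # device compute speeds (FLOP/s), one per device
F_SRV = 5e10                        # edge-server compute speed (FLOP/s)
RATE  = np.array([2e6, 5e6, 1e6])   # bit/s each device would get with the full bandwidth

def dev_latency(i, s, b_i):
    """Latency of device i splitting after layer s with bandwidth share b_i."""
    comp_dev = LAYER_FLOPS[:s].sum() / F_DEV[i]   # layers 1..s on the device
    uplink   = ACT_BITS[s - 1] / (b_i * RATE[i])  # transmit split-layer activation
    comp_srv = LAYER_FLOPS[s:].sum() / F_SRV      # remaining layers on the server
    return comp_dev + uplink + comp_srv

def best_splits(b):
    """Sub-problem 1: bandwidth fixed, so each device's split decouples;
    enumerate the few candidate layers per device."""
    n_layers = len(LAYER_FLOPS)
    return [min(range(1, n_layers + 1), key=lambda s: dev_latency(i, s, b[i]))
            for i in range(len(F_DEV))]

def best_bandwidth(splits, tol=1e-9):
    """Sub-problem 2: splits fixed; minimize the max latency by bisecting
    on a target T. b_i(T) = bits_i / (RATE_i * (T - comp_i)) is the least
    share that lets device i finish by T; T is feasible iff the shares sum to <= 1."""
    comp = np.array([LAYER_FLOPS[:s].sum() / F_DEV[i] + LAYER_FLOPS[s:].sum() / F_SRV
                     for i, s in enumerate(splits)])
    bits = np.array([ACT_BITS[s - 1] for s in splits])
    lo, hi = comp.max(), comp.max() + (bits / RATE).sum() + 1.0  # hi is always feasible
    while hi - lo > tol:
        T = (lo + hi) / 2
        need = bits / (RATE * np.maximum(T - comp, 1e-12))
        lo, hi = (T, hi) if need.sum() > 1 else (lo, T)
    share = bits / (RATE * (hi - comp))
    return share / share.sum()  # hand out any leftover bandwidth proportionally

# Alternate between the two sub-problems until the split points stop changing.
n_dev = len(F_DEV)
b = np.full(n_dev, 1.0 / n_dev)
splits = best_splits(b)
for _ in range(20):
    b = best_bandwidth(splits)
    new_splits = best_splits(b)
    if new_splits == splits:
        break
    splits = new_splits
lat = max(dev_latency(i, s, b[i]) for i, s in enumerate(splits))
print("split points:", splits, "| round latency (s): %.4f" % lat)

Each sub-problem never increases the objective, so the alternation converges; the bisection step is exact because the minimum of a max of decreasing functions under a sum constraint is attained where the shares are as tight as feasibility allows.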
Pages: 5587-5599
Page count: 13