Accelerating Convergence in Split Learning for Time-Varying and Resource-Limited Environments

Cited by: 0
Authors
Marinova, Matea [1 ]
Rakovic, Valentin [1 ]
Affiliations
[1] Ss Cyril & Methodius Univ, Fac Elect Engn & Informat Technol, Skopje, North Macedonia
Source
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024 | 2024
Keywords
Convergence rate maximization; deep neural networks; delay minimization; optimal cut layer; split learning;
DOI
10.1109/MELECON56669.2024.10608579
CLC (Chinese Library Classification)
TM [Electrotechnics]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
Split Learning (SL) is a distributed learning paradigm in which the neural network is partitioned into distinct client-side and server-side segments. This work focuses on optimizing SL's performance in time-varying and resource-constrained systems. Specifically, the main goal of the paper is to determine the optimal cut layer position in the neural network that minimizes the total training delay, i.e., maximizes the convergence rate. The performance analysis shows that the proposed cut layer selection algorithm outperforms state-of-the-art solutions.
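The abstract describes choosing the cut layer that minimizes total training delay. As a rough illustration only (not the paper's actual algorithm), the core trade-off can be sketched as an exhaustive search over candidate cuts, where a cut after layer k places layers 0..k on the client, sends that layer's activations over the link, and runs the rest on the server. All names and numbers below are hypothetical.

```python
def best_cut_layer(layer_flops, act_bytes, client_speed, server_speed, link_rate):
    """Return (cut index, delay) minimizing a per-pass delay estimate.

    layer_flops[i]  -- compute cost of layer i (FLOPs), assumed known
    act_bytes[i]    -- size of layer i's output activations (bytes)
    client_speed    -- client compute rate (FLOP/s)
    server_speed    -- server compute rate (FLOP/s)
    link_rate       -- uplink throughput (bytes/s)
    """
    best_k, best_delay = 0, float("inf")
    for k in range(len(layer_flops)):
        # Client runs layers 0..k, server runs the remainder;
        # the cut-layer activations cross the network link.
        t_client = sum(layer_flops[:k + 1]) / client_speed
        t_comm = act_bytes[k] / link_rate
        t_server = sum(layer_flops[k + 1:]) / server_speed
        delay = t_client + t_comm + t_server
        if delay < best_delay:
            best_k, best_delay = k, delay
    return best_k, best_delay

# Entirely illustrative numbers: a weak client, a fast server,
# and a slow uplink push the optimal cut deeper into the network.
flops = [2e9, 4e9, 4e9, 8e9]      # per-layer FLOPs
acts = [6e6, 3e6, 1.5e6, 0.5e6]   # activation sizes shrink with depth
k, d = best_cut_layer(flops, acts,
                      client_speed=5e9,    # 5 GFLOP/s client
                      server_speed=50e9,   # 50 GFLOP/s server
                      link_rate=1e6)       # 1 MB/s uplink
print(k, d)
```

In a time-varying system, `link_rate` and the compute speeds change over time, so such a search would be re-run (or solved analytically, as the paper does) as conditions evolve.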
Pages: 13-18
Page count: 6