AdaptSFL: Adaptive Split Federated Learning in Resource-Constrained Edge Networks

Cited by: 0
Authors
Lin, Zheng [1 ]
Qu, Guanqiao [1 ]
Wei, Wei [1 ]
Chen, Xianhao [1 ]
Leung, Kin K. [2 ,3 ]
Affiliations
[1] Univ Hong Kong, Dept Elect & Elect Engn, Hong Kong, Peoples R China
[2] Imperial Coll, Dept Elect & Elect Engn, London SW7 2BT, England
[3] Imperial Coll, Dept Comp, London SW7 2BT, England
Source
IEEE TRANSACTIONS ON NETWORKING | 2025
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
Training; Convergence; Computational modeling; Adaptation models; Optimization; Servers; Accuracy; Data models; Analytical models; Federated learning; Distributed learning; split federated learning; client-side model aggregation; model splitting; mobile edge computing; EFFICIENT;
DOI
10.1109/TON.2025.3577790
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
The increasing complexity of deep neural networks poses significant barriers to democratizing AI to resource-limited edge devices. To address this challenge, split federated learning (SFL) has emerged as a promising solution that enables device-server co-training through model splitting. However, although system optimization substantially influences the performance of SFL, the problem remains largely uncharted. In this paper, we first provide a unified convergence analysis of SFL, which quantifies the impact of model splitting (MS) and client-side model aggregation (MA) on its learning performance, laying a theoretical foundation for this field. Based on this convergence bound, we introduce AdaptSFL, an adaptive SFL framework to accelerate SFL under resource-constrained edge computing systems. Specifically, AdaptSFL adaptively controls MS and client-side MA to balance communication-computing latency and training convergence. Extensive simulations across various datasets validate that our proposed AdaptSFL framework takes considerably less time to achieve target accuracy than existing benchmarks.
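To make the two control knobs mentioned in the abstract concrete, the following PyTorch-style sketch illustrates a generic split-federated-learning loop with a configurable cut layer (model splitting, MS) and a periodic client-side model aggregation interval (client-side MA). This is not the authors' implementation and does not reproduce their adaptive control policy; the model, data, and all names (build_model, split_model, fed_avg, train_adapt_sfl, cut_layer, agg_interval) are illustrative assumptions.

# Illustrative sketch only (not the paper's code): split federated learning with
# a configurable cut layer (MS) and a client-side aggregation interval (MA).
import copy
import torch
import torch.nn as nn

def build_model():
    # Toy fully connected network; the cut layer index decides which prefix runs on the device.
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 10))

def split_model(model, cut_layer):
    # Client keeps layers [0, cut_layer); the server hosts the remaining layers.
    layers = list(model.children())
    return nn.Sequential(*layers[:cut_layer]), nn.Sequential(*layers[cut_layer:])

def fed_avg(models):
    # Plain parameter averaging of the client-side sub-models (client-side MA step).
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(0)
    return avg

def train_adapt_sfl(num_clients=4, cut_layer=2, agg_interval=5, rounds=20):
    base = build_model()
    clients = [split_model(copy.deepcopy(base), cut_layer)[0] for _ in range(num_clients)]
    _, server = split_model(copy.deepcopy(base), cut_layer)
    criterion = nn.CrossEntropyLoss()
    opt_server = torch.optim.SGD(server.parameters(), lr=0.05)
    opt_clients = [torch.optim.SGD(c.parameters(), lr=0.05) for c in clients]

    for t in range(rounds):
        for k, client in enumerate(clients):
            x = torch.randn(16, 32)                  # stand-in for a local mini-batch
            y = torch.randint(0, 10, (16,))
            smashed = client(x)                      # device-side forward up to the cut layer
            out = server(smashed)                    # server finishes the forward pass
            loss = criterion(out, y)
            opt_server.zero_grad()
            opt_clients[k].zero_grad()
            loss.backward()                          # gradients flow back across the cut
            opt_server.step()
            opt_clients[k].step()
        if (t + 1) % agg_interval == 0:              # client-side MA every agg_interval rounds
            avg_state = fed_avg(clients)
            for c in clients:
                c.load_state_dict(avg_state)
    return clients, server

if __name__ == "__main__":
    train_adapt_sfl()

In this sketch, cut_layer and agg_interval are fixed arguments; in AdaptSFL, by contrast, both are chosen adaptively from the convergence bound and the communication-computing resource constraints, and client updates would typically be processed in parallel rather than sequentially as shown here.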
Pages: 16