Joint Optimization of Model Partition and Resource Allocation for Split Federated Learning Over Vehicular Edge Networks

Times Cited: 2
Authors
Wu, Maoqiang [1 ,2 ]
Yang, Ruibin [1 ]
Huang, Xumin [1 ,2 ]
Wu, Yuan [3 ,4 ]
Kang, Jiawen [1 ]
Xie, Shengli [1 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
[2] Univ Macau, State Key Lab Internet Things Smart City, Taipa 999078, Peoples R China
[3] Univ Macau, Key Lab Internet Things Smart City, Taipa 999078, Macao, Peoples R China
[4] Univ Macau, Dept Comp & Informat Sci, Taipa 999078, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Servers; Computational modeling; Resource management; Training; Performance evaluation; Federated learning; Energy consumption; Split federated learning; model partition; resource allocation; vehicular edge networks;
DOI
10.1109/TVT.2024.3399011
CLC Classification Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Split federated learning (SFL) is regarded as an efficient paradigm for enabling federated learning while reducing the computation burden on devices, since each device trains only part of the model. However, deploying SFL over resource-constrained vehicular edge networks is challenging, and a cost-effective scheme is needed to minimize the total time and energy consumption of the vehicular devices. To this end, we present a joint optimization scheme based on an improved reinforcement learning method that efficiently determines the optimal model partition point for each vehicular device and the optimal allocation of computing and bandwidth resources among all vehicular devices. Experimental results validate the effectiveness and performance advantages of the proposed scheme.
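To make the optimization target concrete, the sketch below models a single device's SFL cost as a weighted sum of computation and communication delay and energy, parameterized by the chosen partition point and the allocated CPU frequency and uplink rate. The layer profile, constants, and function names are illustrative assumptions for exposition, not the paper's exact formulation or algorithm.

```python
# Minimal sketch (assumed model, not the authors' exact formulation):
# the device runs the first `cut` layers locally, then uploads the smashed
# data of that layer to the edge server; cost = weighted delay + energy.

KAPPA = 1e-27        # effective switched-capacitance coefficient (assumed)
TX_POWER_W = 0.2     # uplink transmit power in watts (assumed)

# Assumed per-layer profile: (CPU cycles on the device, output size in bits)
LAYER_PROFILE = [
    (2e8, 8e6), (4e8, 4e6), (6e8, 2e6), (8e8, 1e6), (1e9, 5e5),
]

def device_cost(cut, cpu_freq_hz, uplink_rate_bps, w_time=0.5, w_energy=0.5):
    """Weighted delay-plus-energy cost of one forward pass up to layer `cut`
    followed by the upload of that layer's smashed data."""
    cycles = sum(c for c, _ in LAYER_PROFILE[:cut])
    smashed_bits = LAYER_PROFILE[cut - 1][1]

    comp_time = cycles / cpu_freq_hz                      # local computation delay (s)
    comp_energy = KAPPA * cycles * cpu_freq_hz ** 2       # dynamic CPU energy (J)
    comm_time = smashed_bits / uplink_rate_bps            # smashed-data upload delay (s)
    comm_energy = TX_POWER_W * comm_time                  # transmission energy (J)

    return w_time * (comp_time + comm_time) + w_energy * (comp_energy + comm_energy)

if __name__ == "__main__":
    # Exhaustively scan partition points for one device as a sanity check;
    # the paper instead learns this choice jointly with resource allocation
    # across all devices via an improved reinforcement learning method.
    best = min(range(1, len(LAYER_PROFILE) + 1),
               key=lambda c: device_cost(c, cpu_freq_hz=1e9, uplink_rate_bps=5e6))
    print("best partition point:", best)
```

Note that the per-device trade-off already appears in this toy setting: a shallow cut shifts work (and energy) to the server but inflates upload delay, while a deep cut does the opposite, which is why the partition point must be optimized jointly with the computing and bandwidth allocation.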
Pages: 15860 - 15865
Number of pages: 6