A Joint Communication and Learning Framework for Hierarchical Split Federated Learning

Cited by: 8
Authors
Khan, Latif U. [1]
Guizani, Mohsen [1]
Al-Fuqaha, Ala [2]
Hong, Choong Seon [3]
Niyato, Dusit [4]
Han, Zhu [3,5,6]
Affiliations
[1] Mohamed Bin Zayed Univ Artificial Intelligence, Machine Learning Dept, Abu Dhabi, U Arab Emirates
[2] Hamad Bin Khalifa Univ, Coll Engn & Appl Sci, Comp Sci Dept, Doha, Qatar
[3] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, South Korea
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[5] Univ Houston, Elect & Comp Engn Dept, Houston, TX 77004 USA
[6] Univ Houston, Comp Sci Dept, Houston, TX 77004 USA
Keywords
Federated learning (FL); hierarchical FL; Internet of Things (IoT); split learning; networks
DOI
10.1109/JIOT.2023.3315673
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
In contrast to methods that rely on centralized training, emerging Internet of Things (IoT) applications can employ federated learning (FL) to train a variety of models with improved performance and privacy preservation. FL requires the distributed training of local models at end devices, which consumes significant computing power (i.e., CPU cycles/s). Most end devices, such as IoT temperature sensors, have limited computing power. One solution to this problem is split FL. However, split FL has its own problems, including a single point of failure, fairness issues, and a poor convergence rate. To overcome these issues, we propose a novel framework called hierarchical split FL (HSFL). Our HSFL framework is built on grouping. Within each group, partial models are computed at the devices, with the remaining computation performed at the edge servers. After the local models are computed, each group performs local aggregation at the edge. End devices are then given access to this edge-aggregated model so they can update their local models. After a set number of rounds, this procedure produces a distinct edge-aggregated HSFL model for each group. These edge-aggregated HSFL models are then shared among the edge servers and aggregated to produce a global model. Additionally, to reduce the cost of HSFL, we formulate an optimization problem that accounts for the relative local accuracy (RLA) of devices, transmission latency, transmission energy, and the computing latency of edge servers. The formulated problem is a mixed-integer nonlinear programming (MINLP) problem and cannot be solved easily. To tackle this challenge, we decompose the formulated problem into two subproblems: an edge computing resource allocation subproblem, and a joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Due to its convex nature, the edge computing resource allocation subproblem is solved using a convex optimizer, whereas a block successive upper-bound minimization (BSUM)-based approach is used for the joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Finally, we present performance evaluation results for the proposed HSFL scheme.
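To make the training hierarchy described in the abstract concrete, the following is a minimal sketch of the HSFL aggregation pattern, assuming weight-vector models, a fixed device/edge cut point, a toy least-squares local update, and synthetic data. All constants and names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the HSFL hierarchy (illustrative; not the paper's code).
# Assumptions: models are plain weight vectors; CUT, the least-squares
# update, and all constants below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

DIM, CUT = 8, 3            # full model size; first CUT weights sit on devices
GROUPS = 2                 # one edge server per group
DEVICES_PER_GROUP = 3
EDGE_ROUNDS = 5            # local rounds before each edge aggregation
GLOBAL_ROUNDS = 4          # edge aggregations before the global aggregation

def local_update(w, dataset):
    """One split training step: device updates w[:CUT], edge finishes w[CUT:]."""
    X, y = dataset
    grad = X.T @ (X @ w - y) / len(y)    # least-squares gradient on local data
    w = w.copy()
    w[:CUT] -= 0.1 * grad[:CUT]          # device-side partial model
    w[CUT:] -= 0.1 * grad[CUT:]          # remaining layers at the edge server
    return w

def make_dataset():
    """Synthetic per-device data (purely illustrative)."""
    X = rng.normal(size=(20, DIM))
    return X, X @ w_true + 0.1 * rng.normal(size=20)

w_true = rng.normal(size=DIM)
data = [[make_dataset() for _ in range(DEVICES_PER_GROUP)] for _ in range(GROUPS)]

global_w = np.zeros(DIM)
for _ in range(GLOBAL_ROUNDS):
    edge_models = []
    for g in range(GROUPS):                       # each group under its edge server
        w = global_w.copy()
        for _ in range(EDGE_ROUNDS):
            # devices compute partial models; the edge server completes and
            # aggregates them, then devices resume from the aggregated model
            local_models = [local_update(w, d) for d in data[g]]
            w = np.mean(local_models, axis=0)     # edge aggregation within group
        edge_models.append(w)                     # edge-aggregated HSFL model
    global_w = np.mean(edge_models, axis=0)       # edge servers share and aggregate

print("distance to ground truth:", np.linalg.norm(global_w - w_true))
```

The inner loop mirrors the "set number of rounds" after which each group's edge-aggregated HSFL model is produced; averaging the edge models stands in for the inter-edge-server aggregation that yields the global model.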
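For the solution approach, the sketch below illustrates the BSUM idea on a toy convex cost: each variable block is updated in turn by minimizing, in closed form, a proximal upper bound of the objective that is tight at the current iterate. The cost function, the proximal weight, and the two-block structure are assumptions for illustration only; the paper's actual subproblem jointly couples RLA, wireless resources, task offloading, and transmit power.

```python
# Illustrative BSUM-style block updates on a toy convex cost (hypothetical;
# not the paper's formulation). Each step minimizes the proximal surrogate
# u(x; x_k) = f(x, y_k) + (rho/2)(x - x_k)^2, an upper bound tight at x_k.
def f(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * x * y   # coupled convex cost

rho = 1.0            # proximal weight; makes each surrogate an upper bound
x, y = 0.0, 0.0
for _ in range(50):
    # closed-form minimizer of the x-block surrogate (quadratic in x)
    x = (2.0 - 0.5 * y + rho * x) / (2.0 + rho)
    # closed-form minimizer of the y-block surrogate (quadratic in y)
    y = (-4.0 - 0.5 * x + rho * y) / (2.0 + rho)

print(f"x = {x:.4f}, y = {y:.4f}, f = {f(x, y):.4f}")
```

In the paper's setting, the joint subproblem would analogously be partitioned into blocks (e.g., RLA, wireless resources, offloading decisions, transmit power), with each block updated against such a surrogate until convergence.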
Pages: 268 - 282
Page count: 15