A Joint Communication and Learning Framework for Hierarchical Split Federated Learning

Cited: 8
Authors
Khan, Latif U. [1 ]
Guizani, Mohsen [1 ]
Al-Fuqaha, Ala [2 ]
Hong, Choong Seon [3 ]
Niyato, Dusit [4 ]
Han, Zhu [3 ,5 ,6 ]
Affiliations
[1] Mohamed Bin Zayed Univ Artificial Intelligence, Machine Learning Dept, Abu Dhabi, U Arab Emirates
[2] Hamad Bin Khalifa Univ, Coll Engn & Appl Sci, Comp Sci Dept, Doha, Qatar
[3] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, South Korea
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[5] Univ Houston, Elect & Comp Engn Dept, Houston, TX 77004 USA
[6] Univ Houston, Comp Sci Dept, Houston, TX 77004 USA
Keywords
Federated learning (FL); hierarchical FL; Internet of Things (IoT); split learning; NETWORKS;
DOI
10.1109/JIOT.2023.3315673
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
In contrast to methods relying on centralized training, emerging Internet of Things (IoT) applications can employ federated learning (FL) to train a variety of models with improved performance and privacy preservation. FL requires the distributed training of local models at end-devices, which consumes significant computing power (i.e., CPU cycles/s). Many end-devices, such as IoT temperature sensors, have limited computing power. One solution to this problem is split FL. However, split FL has its own problems, including a single point of failure, fairness issues, and a poor convergence rate. To overcome these issues, we propose a novel framework called hierarchical split FL (HSFL). Our HSFL framework is built on grouping. Within each group, partial models are computed at the devices, with the remaining computation performed at the edge servers. After the local models are computed, each group performs local aggregation at the edge. End-devices are then given access to this edge aggregated model so that they can update their local models. After a set number of rounds, this procedure produces a unique edge aggregated HSFL model for each group. These edge aggregated HSFL models are then shared among the edge servers and aggregated to produce a global model. Additionally, we formulate an optimization problem that accounts for the relative local accuracy (RLA) of devices, transmission latency, transmission energy, and the computing latency of edge servers in order to minimize the cost of HSFL. The formulated problem is a mixed-integer nonlinear programming (MINLP) problem and cannot be solved easily. To tackle this challenge, we decompose the formulated problem into two subproblems: an edge computing resource allocation subproblem and a joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Due to its convex nature, the edge computing resource allocation subproblem is solved using a convex optimizer, whereas a block successive upper-bound minimization (BSUM)-based approach is used for the joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Finally, we present performance evaluation results for the proposed HSFL scheme.
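The two-level aggregation described in the abstract (per-group edge aggregation of device-side partial models, followed by global aggregation across edge servers) can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's implementation: models are represented as plain parameter lists, aggregation is FedAvg-style weighted averaging by data size, and all function names are hypothetical.

```python
def weighted_average(models, weights):
    """FedAvg-style weighted average of parameter vectors (assumed aggregation rule)."""
    total = sum(weights)
    dim = len(models[0])
    return [
        sum(w * m[i] for m, w in zip(models, weights)) / total
        for i in range(dim)
    ]

def hsfl_round(groups):
    """One illustrative HSFL communication round.

    groups: list of groups; each group is a list of
            (device_model, data_size) pairs.
    Returns (edge_models, global_model).
    """
    # Step 1: each edge server aggregates the local models of its group.
    edge_models = []
    for group in groups:
        models = [m for m, _ in group]
        weights = [n for _, n in group]
        edge_models.append(weighted_average(models, weights))

    # Step 2: edge aggregated models are shared among edge servers and
    # combined into a global model (here weighted by total group data size).
    group_sizes = [sum(n for _, n in g) for g in groups]
    global_model = weighted_average(edge_models, group_sizes)
    return edge_models, global_model
```

In the scheme described above, Step 1 would repeat for a set number of rounds per group before Step 2 runs; the sketch collapses this to a single round for clarity.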
Pages: 268-282
Page count: 15
Related Papers
50 records in total
  • [1] HSFL: An Efficient Split Federated Learning Framework via Hierarchical Organization
    Xia, Tengxi
    Deng, Yongheng
    Yue, Sheng
    He, Junyi
    Ren, Ju
    Zhang, Yaoxue
    2022 18TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT (CNSM 2022): INTELLIGENT MANAGEMENT OF DISRUPTIVE NETWORK TECHNOLOGIES AND SERVICES, 2022, : 1 - 9
  • [2] A Joint Communication and Federated Learning Framework for Internet of Things Networks
    Yang, Zhaohui
    Jia, Guangyu
    Chen, Mingzhe
    Lam, Hak-Keung
    Cui, Shuguang
    Poor, H. Vincent
    Wong, Kai-Kit
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [3] Communication and Storage Efficient Federated Split Learning
    Mu, Yujia
    Shen, Cong
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 2976 - 2981
  • [4] Resource Optimized Hierarchical Split Federated Learning for Wireless Networks
    Khan, Latif U.
    Guizani, Mohsen
    Hong, Choong Seon
    2023 CYBER-PHYSICAL SYSTEMS AND INTERNET-OF-THINGS WEEK, CPS-IOT WEEK WORKSHOPS, 2023, : 254 - 259
  • [5] An Efficient and Secure Federated Learning Communication Framework
    Noura, Hassan
    Hariss, Khalil
    20TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC 2024, 2024, : 961 - 968
  • [6] A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks
    Chen, Mingzhe
    Yang, Zhaohui
    Saad, Walid
    Yin, Changchuan
    Poor, H. Vincent
    Cui, Shuguang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (01) : 269 - 283
  • [7] Hierarchical federated deep reinforcement learning based joint communication and computation for UAV situation awareness
    Li, Haitao
    Huang, Jiawei
    VEHICULAR COMMUNICATIONS, 2024, 50
  • [8] Accelerating Split Federated Learning Over Wireless Communication Networks
    Xu, Ce
    Li, Jinxuan
    Liu, Yuan
    Ling, Yushi
    Wen, Miaowen
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (06) : 5587 - 5599
  • [9] A Communication-Efficient Hierarchical Federated Learning Framework via Shaping Data Distribution at Edge
    Deng, Yongheng
    Lyu, Feng
    Xia, Tengxi
    Zhou, Yuezhi
    Zhang, Yaoxue
    Ren, Ju
    Yang, Yuanyuan
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (03) : 2600 - 2615
  • [10] An Adaptive Compression and Communication Framework for Wireless Federated Learning
    Yang, Yang
    Dang, Shuping
    Zhang, Zhenrong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 10835 - 10854