A Joint Communication and Learning Framework for Hierarchical Split Federated Learning

Cited by: 8
Authors
Khan, Latif U. [1 ]
Guizani, Mohsen [1 ]
Al-Fuqaha, Ala [2 ]
Hong, Choong Seon [3 ]
Niyato, Dusit [4 ]
Han, Zhu [3 ,5 ,6 ]
Affiliations
[1] Mohamed Bin Zayed Univ Artificial Intelligence, Machine Learning Dept, Abu Dhabi, U Arab Emirates
[2] Hamad Bin Khalifa Univ, Coll Engn & Appl Sci, Comp Sci Dept, Doha, Qatar
[3] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, South Korea
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[5] Univ Houston, Elect & Comp Engn Dept, Houston, TX 77004 USA
[6] Univ Houston, Comp Sci Dept, Houston, TX 77004 USA
Keywords
Federated learning (FL); hierarchical FL; Internet of Things (IoT); split learning; networks
DOI
10.1109/JIOT.2023.3315673
CLC number
TP [automation technology, computer technology]
Subject classification code
0812
Abstract
In contrast to centralized training, emerging Internet of Things (IoT) applications can employ federated learning (FL) to train a variety of models, improving both performance and privacy preservation. FL requires the distributed training of local models at end-devices, which consumes significant computing resources (i.e., CPU cycles per second). However, most end-devices, such as IoT temperature sensors, have limited computing power. One solution to this problem is split FL, but split FL suffers from its own issues, including a single point of failure, fairness concerns, and a poor convergence rate. To overcome these issues, we propose a novel framework called hierarchical split FL (HSFL). Our HSFL framework is built on grouping. Within each group, partial models are computed at the devices, with the remaining computation performed at the edge servers. After computing the local models, each group performs local aggregation at the edge. The edge-aggregated model is then sent back to the end-devices so that they can update their local models. After a set number of rounds, this procedure yields a distinct edge-aggregated HSFL model for each group. These edge-aggregated HSFL models are then shared among the edge servers and aggregated to produce a global model. Additionally, we formulate an optimization problem that accounts for the relative local accuracy (RLA) of devices, transmission latency, transmission energy, and the computing latency of edge servers in order to minimize the cost of HSFL. The formulated problem is a mixed-integer nonlinear programming (MINLP) problem and cannot be solved easily. To tackle this challenge, we decompose it into two subproblems: an edge computing resource allocation problem, and a joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation problem. Owing to its convex nature, the edge computing resource allocation subproblem is solved using a convex optimizer, whereas a block successive upper-bound minimization (BSUM)-based approach is applied to the joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Finally, we present performance evaluation results for the proposed HSFL scheme.
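The hierarchical training flow described in the abstract (device-side partial models, per-group edge aggregation, and inter-edge global aggregation) can be summarized in a short simulation. The following is a minimal sketch under stated assumptions: the toy quadratic loss, the FedAvg-style averaging, and all names and constants are illustrative, not the authors' implementation.

```python
# Minimal simulation of the HSFL training flow sketched in the abstract.
# Everything here is an illustrative assumption -- the toy quadratic loss,
# the FedAvg-style averaging, and all names/constants -- not the authors'
# implementation.
import numpy as np

rng = np.random.default_rng(0)

DIM_DEV, DIM_EDGE = 4, 4          # sizes of device-side / edge-side model parts
GROUPS, DEVICES_PER_GROUP = 2, 3
LOCAL_STEPS, EDGE_ROUNDS, LR = 5, 4, 0.1

def local_step(w_dev, w_edge, target):
    """One split-training step on a toy quadratic loss ||w - target||^2.
    In real split FL the device sends activations ("smashed data") to the
    edge server, which completes the forward/backward pass."""
    w = np.concatenate([w_dev, w_edge])
    w -= LR * 2.0 * (w - target)                 # gradient of the quadratic
    return w[:DIM_DEV], w[DIM_DEV:]

def fedavg(parts):
    return np.mean(parts, axis=0)

# Per-device targets emulate data heterogeneity across devices and groups.
targets = [[rng.normal(g, 1.0, DIM_DEV + DIM_EDGE)
            for _ in range(DEVICES_PER_GROUP)] for g in range(GROUPS)]

dev_parts = [[np.zeros(DIM_DEV) for _ in range(DEVICES_PER_GROUP)]
             for _ in range(GROUPS)]             # one device-side part per device
edge_parts = [np.zeros(DIM_EDGE) for _ in range(GROUPS)]  # one edge part per group

for _ in range(EDGE_ROUNDS):                     # rounds before global sharing
    for g in range(GROUPS):
        for d in range(DEVICES_PER_GROUP):
            for _ in range(LOCAL_STEPS):
                dev_parts[g][d], edge_parts[g] = local_step(
                    dev_parts[g][d], edge_parts[g], targets[g][d])
        # Edge aggregation: average device-side parts within the group and
        # broadcast the result back so devices resume from the aggregated model.
        agg = fedavg(dev_parts[g])
        dev_parts[g] = [agg.copy() for _ in range(DEVICES_PER_GROUP)]

# Global aggregation: edge-aggregated HSFL models are shared among edge
# servers and averaged into a single global model.
global_dev = fedavg([parts[0] for parts in dev_parts])
global_edge = fedavg(edge_parts)
print("global model:", np.concatenate([global_dev, global_edge]))
```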
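Likewise, the BSUM step can be illustrated on a two-block toy problem. BSUM cycles over blocks of variables, minimizing a locally tight upper bound of the objective in one block while fixing the others. In the sketch below the objective f(x, y) = (xy - 1)^2 + lam*(x^2 + y^2) is an assumed stand-in for the paper's joint RLA/resource/offloading/power subproblem: it is nonconvex jointly but convex in each block, so exact per-block minimization is a valid (tight) surrogate.

```python
# BSUM-style sketch on a two-block toy problem. The real joint subproblem
# (RLA, wireless resources, offloading, transmit power) is replaced by an
# assumed stand-in f(x, y) = (x*y - 1)**2 + lam*(x**2 + y**2): nonconvex
# jointly, but convex in each block, so exact per-block minimization is a
# valid (tight) upper-bound surrogate.
lam = 0.1

def block_min(other):
    # Closed-form minimizer of f in one block with the other block fixed:
    # setting the per-block derivative to zero gives x* = y / (y^2 + lam).
    return other / (other * other + lam)

x, y = 2.0, 0.5
for _ in range(50):          # cycle over blocks until a stationary point
    x = block_min(y)
    y = block_min(x)
print(f"x={x:.4f}, y={y:.4f}, f={(x*y - 1)**2 + lam*(x*x + y*y):.4f}")
```

In the paper's setting, each block update would itself be a constrained optimization rather than a closed form; the sketch only shows the cyclic block structure that BSUM relies on.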
Pages: 268-282
Number of pages: 15
Related papers
50 records in total
  • [21] A Joint Client-Server Watermarking Framework for Federated Learning
    Fang, Shufen
    Gai, Keke
    Yu, Jing
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT IV, KSEM 2024, 2024, 14887 : 424 - 436
  • [22] SplitFed: When Federated Learning Meets Split Learning
    Thapa, Chandra
    Arachchige, Pathum Chamikara Mahawaga
    Camtepe, Seyit
    Sun, Lichao
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022 : 8485 - 8493
  • [23] GossipFL: A Decentralized Federated Learning Framework With Sparsified and Adaptive Communication
    Tang, Zhenheng
    Shi, Shaohuai
    Li, Bo
    Chu, Xiaowen
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (03) : 909 - 922
  • [24] SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression
    Li, Zhize
    Zhao, Haoyu
    Li, Boyue
    Chi, Yuejie
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [25] A federated deep learning framework for privacy preservation and communication efficiency
    Cao, Tien-Dung
    Truong-Huu, Tram
    Tran, Hien
    Tran, Khanh
    JOURNAL OF SYSTEMS ARCHITECTURE, 2022, 124
  • [26] An Efficient Federated Learning Framework for Training Semantic Communication Systems
    Nguyen, Loc X.
    Le, Huy Q.
    Tun, Ye Lin
    Aung, Pyae Sone
    Tun, Yan Kyaw
    Han, Zhu
    Hong, Choong Seon
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (10) : 15872 - 15877
  • [27] FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity
    Li, Guanghao
    Hu, Yue
    Zhang, Miao
    Liu, Ji
    Yin, Quanjun
    Peng, Yong
    Dou, Dejing
    51ST INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2022, 2022
  • [28] A hierarchical federated learning framework for collaborative quality defect inspection in construction
    Wu, Hai-Tao
    Li, Heng
    Chi, Hung-Lin
    Kou, Wei-Bin
    Wu, Yik-Chung
    Wang, Shuai
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [29] A Hierarchical Asynchronous Federated Learning Privacy-Preserving Framework for IoVs
    Zhou, Rui
    Niu, Xianhua
    Xiong, Ling
    Wang, Yangpeng
    Zhao, Yue
    Yu, Kai
    FRONTIERS IN CYBER SECURITY, FCS 2023, 2024, 1992 : 99 - 113
  • [30] Latency Minimization for Split Federated Learning
    Guo, Jie
    Xu, Ce
    Ling, Yushi
    Liu, Yuan
    Yu, Qi
    2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL, 2023