Optimizing Model Dissemination for Hierarchical Clustering Learning in Edge Computing

Cited by: 0
Authors
Zhang, Long [1 ]
Feng, Gang [1 ]
Qin, Zheng [1 ]
Li, Xiaoqian [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Natl Key Lab Wireless Commun, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Servers; Data models; Computer architecture; Clustering algorithms; Heuristic algorithms; Costs; Hierarchical clustering learning; distributed learning; edge computing; sequential combinatorial MAB; COMMUNICATION; ALGORITHMS;
DOI
10.1109/TCCN.2024.3401753
CLC Number
TN [Electronic Technology, Communication Technology];
Discipline Code
0809;
Abstract
Hierarchical clustering learning (HCL) extends traditional parameter-server-based distributed learning by clustering heterogeneous user equipments (UEs) under cluster nodes (CNs) located at the network edge. Most vanilla model dissemination strategies in distributed learning rely on one-to-many transmissions, inevitably consuming scarce bandwidth resources. Communication efficiency is therefore crucial for HCL in resource-constrained edge networks. In this paper, we propose a multistage cooperative model dissemination strategy that sequentially determines the subsets of CNs allowed to transmit models concurrently in individual scheduling stages, thereby improving the communication efficiency of HCL. We formulate the strategy design as an optimization problem that minimizes the maximum per-round completion time, i.e., that of the slowest straggler, while accurately clustering UEs to CNs with similar data distributions. To make the sequential, combinatorial decisions required in each stage, we develop an online learning algorithm called the sequential combinatorial multi-armed bandit (SCMAB), which learns a multistage cooperative model dissemination strategy in an asymptotically optimal manner. Furthermore, the SCMAB dynamically re-clusters UEs to appropriate CNs according to the similarity of the UEs' data distributions. Simulation results indicate that, compared with traditional transmission strategies, the proposed strategy improves communication efficiency by 2.11% to 5.57% while achieving comparable or even higher learning accuracy.
Pages: 2397-2411
Page count: 15