Two-stage model fusion scheme based on knowledge distillation for stragglers in federated learning

Cited by: 0
Authors
Xu, Jiuyun [1]
Li, Xiaowen [1]
Zhu, Kongshang [1]
Zhou, Liang [1]
Zhao, Yingzhi [1]
Affiliations
[1] China University of Petroleum (East China), Qingdao Institute of Software, College of Computer Science & Technology, 66 Changjiang West Rd, Qingdao 266580, People's Republic of China
Keywords
Federated learning; Straggler problem; Knowledge distillation; Heterogeneity; Training efficiency
DOI
10.1007/s13042-024-02436-5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL), as an emerging distributed learning paradigm, enables devices (also called clients) that store local data to participate collaboratively in a training task without the data ever leaving the devices, thereby integrating multiparty data while meeting privacy-protection requirements. However, in real-world environments the participating clients are autonomous entities with heterogeneous capabilities and unstable networks, so FL is plagued by stragglers whenever intermediate training results are exchanged synchronously. To this end, this paper proposes a new FL scheme, FedTd, with a two-stage fusion process based on knowledge distillation, which transfers the knowledge of straggler models into the global model without slowing down training, thus balancing efficiency and model performance. We have evaluated the proposed algorithm on three popular datasets. The experimental results show that, under heterogeneous conditions, FedTd improves training efficiency and maintains good model accuracy compared to baseline methods, exhibiting strong robustness against stragglers. With our approach, the running time can be accelerated by 1.97-3.32× under scenarios with a higher level of data heterogeneity.
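Since the abstract only outlines the approach, the following is a minimal PyTorch sketch of what a two-stage fusion round might look like: the global model is first aggregated from the on-time clients, and straggler models that arrive late are then fused in by knowledge distillation rather than being discarded. All names here (fedavg, distill_stragglers, proxy_loader, temperature) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the abstract does not specify FedTd's exact algorithm,
# so the aggregation rule, proxy data loader, and temperature are assumptions.
import copy
import torch
import torch.nn.functional as F

def fedavg(global_model, client_states):
    """Stage 1 (assumed): average the on-time client updates, FedAvg-style."""
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        avg_state[key] = torch.stack(
            [s[key].float() for s in client_states]
        ).mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model

def distill_stragglers(global_model, straggler_models, proxy_loader,
                       temperature=2.0, epochs=1, lr=1e-3):
    """Stage 2 (assumed): transfer straggler knowledge into the global model
    via knowledge distillation, so late updates still contribute."""
    optimizer = torch.optim.SGD(global_model.parameters(), lr=lr)
    global_model.train()
    for teacher in straggler_models:
        teacher.eval()
    for _ in range(epochs):
        for x, _ in proxy_loader:  # unlabeled proxy data suffices for distillation
            with torch.no_grad():
                # average the stragglers' soft predictions as the teacher signal
                teacher_logits = torch.stack(
                    [m(x) for m in straggler_models]
                ).mean(dim=0)
            student_logits = global_model(x)
            # standard temperature-scaled KL distillation loss
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return global_model
```

In this reading, stage 1 keeps the round time bounded by the on-time clients, while stage 2 recovers knowledge from stragglers without forcing the server to wait for them.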
Pages: 3067-3083
Number of pages: 17
Related papers
50 records in total
  • [31] Fedadkd: heterogeneous federated learning via adaptive knowledge distillation
    Song, Yalin
    Liu, Hang
    Zhao, Shuai
    Jin, Haozhe
    Yu, Junyang
    Liu, Yanhong
    Zhai, Rui
    Wang, Longge
    PATTERN ANALYSIS AND APPLICATIONS, 2024, 27 (04)
  • [32] Mitigation of Membership Inference Attack by Knowledge Distillation on Federated Learning
    Ueda, Rei
    Nakai, Tsunato
    Yoshida, Kota
    Fujino, Takeshi
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2025, E108A (03) : 267 - 279
  • [33] WHEN FEDERATED LEARNING MEETS KNOWLEDGE DISTILLATION
    Pang, Xiaoyi
    Hu, Jiahui
    Sun, Peng
    Ren, Ju
    Wang, Zhibo
    IEEE WIRELESS COMMUNICATIONS, 2024, 31 (05) : 208 - 214
  • [34] Two-Stage Edge-Side Fault Diagnosis Method Based on Double Knowledge Distillation
    Yang, Yang
    Long, Yuhan
    Lin, Yijing
    Gao, Zhipeng
    Rui, Lanlan
    Yu, Peng
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76 (03): 3623 - 3651
  • [35] Personalized Federated Learning Method Based on Collation Game and Knowledge Distillation
    Sun Y.
    Shi Y.
    Li M.
    Yang R.
    Si P.
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2023, 45 (10): 3702 - 3709
  • [36] A two-stage federated learning method for personalization via selective collaboration
    Xu, Jiuyun
    Zhou, Liang
    Zhao, Yingzhi
    Li, Xiaowen
    Zhu, Kongshang
    Xu, Xiangrui
    Duan, Qiang
    Zhang, Ruru
    COMPUTER COMMUNICATIONS, 2025, 232
  • [37] FedUA: An Uncertainty-Aware Distillation-Based Federated Learning Scheme for Image Classification
    Lee, Shao-Ming
    Wu, Ja-Ling
    INFORMATION, 2023, 14 (04)
  • [38] Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
    Yang, Ze
    Shou, Linjun
    Gong, Ming
    Lin, Wutao
    Jiang, Daxin
    PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020: 690 - 698
  • [39] Model-based Federated Reinforcement Distillation
    Ryu, Sefutsu
    Takamaeda-Yamazaki, Shinya
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 1109 - 1114
  • [40] FedDK: Improving Cyclic Knowledge Distillation for Personalized Healthcare Federated Learning
    Xu, Yikai
    Fan, Hongbo
    IEEE ACCESS, 2023, 11 : 72409 - 72417