Two-stage model fusion scheme based on knowledge distillation for stragglers in federated learning

Cited by: 0
Authors
Xu, Jiuyun [1 ]
Li, Xiaowen [1 ]
Zhu, Kongshang [1 ]
Zhou, Liang [1 ]
Zhao, Yingzhi [1 ]
Affiliations
[1] China Univ Petr East China, Qingdao Inst Software, Coll Comp Sci & Technol, 66 Changjiang West Rd, Qingdao 266580, Peoples R China
Keywords
Federated learning; Straggler problem; Knowledge distillation; Heterogeneity; Training efficiency;
DOI
10.1007/s13042-024-02436-5
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning (FL), an emerging distributed learning paradigm, enables devices (also called clients) that store local data to participate collaboratively in a training task without the data ever leaving the devices, thereby integrating multiparty data while meeting privacy-protection requirements. In real-world environments, however, the participating clients are autonomous entities with heterogeneous capabilities and unstable network connections, so FL suffers from stragglers whenever intermediate training results are exchanged synchronously. To this end, this paper proposes FedTd, a new FL scheme with a two-stage fusion process based on knowledge distillation, which transfers the knowledge of straggler models to the global model without slowing down training, thus balancing efficiency and model performance. We evaluated the proposed algorithm on three popular datasets. The experimental results show that, compared with baseline methods under heterogeneous conditions, FedTd improves training efficiency while maintaining good model accuracy, exhibiting strong robustness against stragglers. With our approach, the running time is accelerated by 1.97-3.32x in scenarios with a higher level of data heterogeneity.
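The paper's exact two-stage procedure is specified in the article itself; as a rough illustration of the idea the abstract describes (fusing a late straggler's knowledge into the already-aggregated global model via distillation, rather than blocking the synchronous round), here is a minimal PyTorch-style sketch. All names here (distill_straggler_into_global, proxy_loader, the temperature value) are hypothetical assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): distill a frozen
# straggler model (teacher) into the current global model (student),
# so synchronous aggregation never has to wait for slow clients.
import torch
import torch.nn.functional as F

def distill_straggler_into_global(global_model, straggler_model, proxy_loader,
                                  temperature=2.0, lr=1e-3, epochs=1):
    straggler_model.eval()   # teacher: the late straggler's model, frozen
    global_model.train()     # student: the already-aggregated global model
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in proxy_loader:   # assumed: a loader yielding unlabeled input batches
            with torch.no_grad():
                teacher_logits = straggler_model(x)
            student_logits = global_model(x)
            # Standard Hinton-style distillation loss: KL divergence between
            # temperature-softened teacher and student distributions,
            # rescaled by temperature**2 to keep gradient magnitudes stable.
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return global_model
```

In this sketch, the server would first average the on-time client updates (stage one) and then run the distillation step above whenever a straggler's model arrives late (stage two), which is how knowledge from slow clients can reach the global model without stalling the synchronous round.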
Pages: 3067-3083
Page count: 17
Related Papers
50 records in total
  • [41] Knowledge Distillation Assisted Robust Federated Learning: Towards Edge Intelligence
    Qiao, Yu
    Adhikary, Apurba
    Kim, Ki Tae
    Zhang, Chaoning
    Hong, Choong Seon
    ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024, : 843 - 848
  • [42] Parameterized data-free knowledge distillation for heterogeneous federated learning
    Guo, Cheng
    He, Qianqian
    Tang, Xinyu
    Liu, Yining
    Jie, Yingmo
    KNOWLEDGE-BASED SYSTEMS, 2025, 317
  • [43] Prototype-Decomposed Knowledge Distillation for Learning Generalized Federated Representation
    Wu, Aming
    Yu, Jiaping
    Wang, Yuxuan
    Deng, Cheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 10991 - 11002
  • [44] PFL-DKD: Modeling decoupled knowledge fusion with distillation for improving personalized federated learning
    Ge, Huanhuan
    Pokhrel, Shiva Raj
    Liu, Zhenyu
    Wang, Jinlong
    Li, Gang
    COMPUTER NETWORKS, 2024, 254
  • [45] DECENTRALIZED FEDERATED LEARNING VIA MUTUAL KNOWLEDGE DISTILLATION
    Huang, Yue
    Kong, Lanju
    Li, Qingzhong
    Zhang, Baochen
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 342 - 347
  • [46] Group-Based Federated Knowledge Distillation Intrusion Detection
    Gao, Tiaokang
    Jin, Xiaoning
    INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING, 2024, 34 (08) : 1251 - 1279
  • [47] A Two-Stage Federated Learning Framework for Class Imbalance in Aerial Scene Classification
    Lv, Zhengpeng
    Zhuang, Yihong
    Yang, Gang
    Huang, Yue
    Ding, Xinghao
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IV, 2024, 14428 : 430 - 441
  • [48] Towards Communication-Efficient Federated Learning Through Particle Swarm Optimization and Knowledge Distillation
    Zaman, Saika
    Talukder, Sajedul
    Hossain, Md Zarif
    Puppala, Sai Mani Teja
    Imteaj, Ahmed
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024, : 510 - 518
  • [49] Feature fusion-based collaborative learning for knowledge distillation
    Li, Yiting
    Sun, Liyuan
    Gou, Jianping
    Du, Lan
    Ou, Weihua
    INTERNATIONAL JOURNAL OF DISTRIBUTED SENSOR NETWORKS, 2021, 17 (11)
  • [50] Incentive Mechanism Design for Federated Learning: A Two-stage Stackelberg Game Approach
    Xiao, Guiliang
    Xiao, Mingjun
    Gao, Guoju
    Zhang, Sheng
    Zhao, Hui
    Zou, Xiang
    2020 IEEE 26TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2020, : 148 - 155