HADFL: Heterogeneity-aware Decentralized Federated Learning Framework

Cited by: 16
Authors
Cao, Jing [1 ]
Lian, Zirui [1 ]
Liu, Weihong [1 ]
Zhu, Zongwei [1 ]
Ji, Cheng [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] Nanjing Univ Sci & Technol, Nanjing, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Distributed Training; Machine Learning; Federated Learning; Heterogeneous Computing;
DOI
10.1109/DAC18074.2021.9586101
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) supports training models on geographically distributed devices. However, traditional FL systems adopt a centralized synchronous strategy, which imposes high communication pressure on the central server and poses model generalization challenges. Existing optimizations of FL either fail to speed up training on heterogeneous devices or suffer from poor communication efficiency. In this paper, we propose HADFL, a framework that supports decentralized asynchronous training on heterogeneous devices. Each device trains the model locally on its own data, using a heterogeneity-aware number of local steps. In each aggregation cycle, devices are selected probabilistically to perform model synchronization and aggregation. Compared with traditional FL systems, HADFL relieves the central server's communication pressure, efficiently utilizes heterogeneous computing power, and achieves speedups of up to 3.15x over decentralized-FedAvg and 4.68x over the PyTorch distributed training scheme, with almost no loss of convergence accuracy.
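The three mechanisms the abstract names (heterogeneity-aware local steps, probability-based device selection, and model aggregation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the step-scaling rule, and the selection weights are all assumptions for illustration.

```python
import random

def local_steps(device_speed, base_steps=10):
    # Heterogeneity-aware local steps: scale the per-cycle step count by a
    # (hypothetical) relative device speed, so heterogeneous devices finish
    # an aggregation cycle at roughly the same time. The paper's exact
    # scheduling rule may differ.
    return max(1, round(base_steps * device_speed))

def select_aggregators(devices, weights, k):
    # Probability-based selection: in each aggregation cycle, pick k devices
    # to perform synchronization and aggregation, weighted by hypothetical
    # per-device scores (e.g., computing power or data size).
    return random.choices(devices, weights=weights, k=k)

def aggregate(models, data_sizes):
    # FedAvg-style weighted average of model parameters (flat lists),
    # weighted by each device's local data size.
    total = sum(data_sizes)
    dim = len(models[0])
    return [sum(m[i] * n for m, n in zip(models, data_sizes)) / total
            for i in range(dim)]

# Toy run: two devices with 2-parameter models and data sizes 1 and 3.
models = [[1.0, 2.0], [3.0, 4.0]]
print(aggregate(models, [1, 3]))  # -> [2.5, 3.5]
```

The weighted average above is the standard FedAvg rule; the decentralized aspect of HADFL would replace the central server with peer-to-peer exchanges among the selected devices.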
Pages: 1-6
Page count: 6
Related Papers
50 in total
  • [41] Graph Federated Learning Based on the Decentralized Framework
    Liu, Peilin
    Tang, Yanni
    Zhang, Mingyue
    Chen, Wu
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT III, 2023, 14256 : 452 - 463
  • [42] EdgeFL: A Lightweight Decentralized Federated Learning Framework
    Zhang, Hongyi
    Bosch, Jan
    Olsson, Helena Holmström
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024, : 556 - 561
  • [43] Petrel: Heterogeneity-Aware Distributed Deep Learning Via Hybrid Synchronization
    Zhou, Qihua
    Guo, Song
    Qu, Zhihao
    Li, Peng
    Li, Li
    Guo, Minyi
    Wang, Kun
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (05) : 1030 - 1043
  • [44] Heterogeneity-aware Distributed Parameter Servers
    Jiang, Jiawei
    Cui, Bin
    Zhang, Ce
    Yu, Lele
    SIGMOD'17: PROCEEDINGS OF THE 2017 ACM INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2017, : 463 - 478
  • [45] Heterogeneity-Aware Distributed Machine Learning Training via Partial Reduce
    Miao, Xupeng
    Nie, Xiaonan
    Shao, Yingxia
    Yang, Zhi
    Jiang, Jiawei
    Ma, Lingxiao
    Cui, Bin
    SIGMOD '21: PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2021, : 2262 - 2270
  • [46] HiFlash: Communication-Efficient Hierarchical Federated Learning With Adaptive Staleness Control and Heterogeneity-Aware Client-Edge Association
    Wu, Qiong
    Chen, Xu
    Ouyang, Tao
    Zhou, Zhi
    Zhang, Xiaoxi
    Yang, Shusen
    Zhang, Junshan
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (05) : 1560 - 1579
  • [47] HALO: Heterogeneity-Aware Load Balancing
    Gandhi, Anshul
    Zhang, Xi
    Mittal, Naman
    2015 IEEE 23rd International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2015), 2015, : 242 - 251
  • [48] Heterogeneity-aware distributed access structure
    Beltrán, AG
    Milligan, P
    Sage, P
    FIFTH IEEE INTERNATIONAL CONFERENCE ON PEER-TO-PEER COMPUTING, PROCEEDINGS, 2005, : 152 - 153
  • [49] Heterogeneity-Aware Data Placement in Hybrid Clouds
    Marquez, Jack D.
    Gonzalez, Juan D.
    Mondragon, Oscar H.
    CLOUD COMPUTING - CLOUD 2019, 2019, 11513 : 177 - 191
  • [50] A decentralized asynchronous federated learning framework for edge devices
    Wang, Bin
    Tian, Zhao
    Ma, Jie
    Zhang, Wenju
    She, Wei
    Liu, Wei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2025, 166