HARMONY: Heterogeneity-Aware Hierarchical Management for Federated Learning System

Cited by: 4
Authors
Tian, Chunlin [1 ]
Li, Li [1 ]
Shi, Zhan [2 ]
Wang, Jun [3 ]
Xu, ChengZhong [1 ]
Affiliations
[1] Univ Macau, IOTSC, Macau, Peoples R China
[2] Univ Texas Austin, Austin, TX 78712 USA
[3] Futurewei Technol, Santa Clara, CA USA
Keywords
Federated learning; heterogeneous systems; mobile device;
DOI
10.1109/MICRO56248.2022.00049
CLC number
TP3 [Computing technology, computer technology];
Subject classification code
0812;
Abstract
Federated learning (FL) enables multiple devices to collaboratively train a shared model while preserving data privacy. However, despite its emerging applications in many areas, real-world deployment of on-device FL is challenging due to wildly diverse training capability and data distribution across heterogeneous edge devices, which strongly affect both model performance and training efficiency. This paper proposes Harmony, a high-performance FL framework with heterogeneity-aware hierarchical management of training devices and training data. Unlike previous work that mainly focuses on heterogeneity in either training capability or data distribution, Harmony adopts a hierarchical structure to jointly handle both heterogeneities in a unified manner. Specifically, the two core components of Harmony are a global coordinator hosted by the central server and a local coordinator deployed on each participating device. Without accessing the raw data, the global coordinator first selects the participants and then reorganizes their training samples based on an accurate estimation of the runtime training capability and data distribution of each device. The local coordinator keeps monitoring the local training status and conducts efficient training under the guidance of the global coordinator. We conduct extensive experiments to evaluate Harmony on both hardware and simulation testbeds with representative datasets. The experimental results show that Harmony improves accuracy by 1.67%-27.62%. In addition, Harmony accelerates training by up to 3.29x (1.84x on average) and saves up to 88.41% of energy (28.04% on average).
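The abstract describes a two-level coordination scheme: a server-side global coordinator that selects participants and reorganizes their per-round training samples based on estimated runtime capability and data-distribution profiles, and an on-device local coordinator that reports those profiles and runs the assigned workload without exposing raw data. The Python sketch below is a minimal, hypothetical illustration of that loop; every name (GlobalCoordinator, LocalCoordinator, DeviceProfile, select_and_assign, the speed-times-deadline budget rule) is an assumption made here for exposition, not the authors' actual implementation.

# Hypothetical sketch of the hierarchical coordination described in the abstract.
# All class/function names and the selection heuristic are illustrative assumptions.
import random
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    device_id: int
    speed: float                                        # estimated samples/second (runtime training capability)
    label_counts: dict = field(default_factory=dict)    # coarse data-distribution summary, no raw data

class LocalCoordinator:
    """Runs on each device: reports a profile, then trains on the assigned sample budget."""
    def __init__(self, device_id, speed, label_counts):
        self.profile = DeviceProfile(device_id, speed, label_counts)

    def report_profile(self):
        return self.profile                              # only summary statistics leave the device

    def train(self, sample_budget):
        # Placeholder for local SGD over `sample_budget` samples.
        return {"device": self.profile.device_id, "samples_used": sample_budget}

class GlobalCoordinator:
    """Runs on the server: selects participants and assigns per-device sample budgets."""
    def __init__(self, round_deadline=60.0):
        self.round_deadline = round_deadline             # seconds per round (assumed knob)

    def select_and_assign(self, profiles, k):
        # 1) Participant selection: a simplified score mixing training capability
        #    and data-distribution coverage (number of distinct labels held).
        def score(p):
            return p.speed * (1 + sum(1 for n in p.label_counts.values() if n > 0))
        chosen = sorted(profiles, key=score, reverse=True)[:k]
        # 2) Sample reorganization: cap each device's per-round workload so it
        #    can finish within the round deadline.
        return {p.device_id: int(p.speed * self.round_deadline) for p in chosen}

# Usage: one simulated round with 10 devices, 4 participants.
devices = [LocalCoordinator(i, speed=random.uniform(5, 50),
                            label_counts={c: random.randint(0, 100) for c in range(10)})
           for i in range(10)]
server = GlobalCoordinator()
budgets = server.select_and_assign([d.report_profile() for d in devices], k=4)
results = [d.train(budgets[d.profile.device_id]) for d in devices
           if d.profile.device_id in budgets]
print(results)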
Pages: 631-645
Page count: 15
Related papers
50 records in total
  • [31] Mobility-aware Device Sampling for Statistical Heterogeneity in Hierarchical Federated Learning
    Zhang, Songli
    Zheng, Zhenzhe
    Li, Qinya
    Wu, Fan
    Chen, Guihai
    2024 IEEE 44TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS 2024, 2024, : 656 - 667
  • [32] Personalized Heterogeneity-aware Federated Search Towards Better Accuracy and Energy Efficiency
    Yang, Zhao
    Sun, Qingshuang
    2022 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN, ICCAD, 2022,
  • [33] Heterogeneity-aware Cross-school Electives Recommendation: a Hybrid Federated Approach
    Ju, Chengyi
    Cao, Jiannong
    Yang, Yu
    Yang, Zhen-Qun
    Lee, Ho Man
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 1500 - 1508
  • [34] Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads
    Narayanan, Deepak
    Santhanam, Keshav
    Kazhamiaka, Fiodar
    Phanishayee, Amar
    Zaharia, Matei
    PROCEEDINGS OF THE 14TH USENIX SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION (OSDI '20), 2020, : 481 - 498
  • [35] Heterogeneity-aware Deep Learning Workload Deployments on the Computing Continuum
    Bouvier, Thomas
    2021 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW), 2021, : 1027 - 1027
  • [36] Hop: Heterogeneity-aware Decentralized Training
    Luo, Qinyi
    Lin, Jinkun
    Zhuo, Youwei
    Qian, Xuehai
    TWENTY-FOURTH INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS (ASPLOS XXIV), 2019, : 893 - 907
  • [37] A Heterogeneity-Aware Task Scheduler for Spark
    Xu, Luna
    Butt, Ali R.
    Lim, Seung-Hwan
    Kannan, Ramakrishnan
    2018 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING (CLUSTER), 2018, : 245 - 256
  • [38] Delay-Aware Hierarchical Federated Learning
    Lin, Frank Po-Chen
    Hosseinalipour, Seyyedali
    Michelusi, Nicolo
    Brinton, Christopher G.
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (02) : 674 - 688
  • [39] Petrel: Heterogeneity-Aware Distributed Deep Learning Via Hybrid Synchronization
    Zhou, Qihua
    Guo, Song
    Qu, Zhihao
    Li, Peng
    Li, Li
    Guo, Minyi
    Wang, Kun
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (05) : 1030 - 1043
  • [40] Heterogeneity-aware Distributed Parameter Servers
    Jiang, Jiawei
    Cui, Bin
    Zhang, Ce
    Yu, Lele
    SIGMOD'17: PROCEEDINGS OF THE 2017 ACM INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2017, : 463 - 478