Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing

Cited by: 0
Authors
Wu, Yebo [1 ]
Li, Li [1 ]
Tian, Chunlin [1 ]
Chang, Tao [2 ]
Lin, Chi [3 ]
Wang, Cong [4 ]
Xu, Cheng-Zhong [1 ]
Affiliations
[1] Univ Macau, State Key Lab IoTSC, Taipa, Macao, Peoples R China
[2] Natl Univ Def Technol, Changsha, Peoples R China
[3] Dalian Univ Technol, Dalian, Peoples R China
[4] Zhejiang Univ, Hangzhou, Peoples R China
Keywords
Federated Learning; On-Device Training; Heterogeneous Memory
DOI
10.1109/IWQoS61813.2024.10682916
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL) is an emerging learning paradigm that enables multiple devices to collaboratively train a shared model while preserving data privacy. However, the intensive memory footprint of the training process severely bottlenecks the deployment of FL on resource-limited mobile devices in real-world settings. A framework that effectively reduces the memory footprint while guaranteeing training efficiency and model accuracy is therefore crucial for FL. In this paper, we propose SmartFreeze, a framework that effectively reduces the memory footprint by conducting training in a progressive manner. Instead of updating the full model in each training round, SmartFreeze divides the shared model into blocks, each consisting of a specified number of layers. It first trains the front block with a well-designed output module, safely freezes it after convergence, and then triggers the training of the next one. This process iterates until the whole model has been trained. In this way, the backward computation of the frozen blocks, along with the memory for storing their intermediate activations and gradients, is eliminated. Beyond the progressive training framework, SmartFreeze comprises two core components: a pace controller and a participant selector. The pace controller monitors the training progress of each block at runtime and safely freezes it after convergence, while the participant selector chooses the right devices to participate in the training of each block by jointly considering memory capacity as well as statistical and system heterogeneity. Extensive experiments evaluate the effectiveness of SmartFreeze on both simulation and hardware testbeds. The results demonstrate that SmartFreeze reduces average memory usage by up to 82%, improves model accuracy by up to 83.1%, and accelerates the training process by up to 2.02x.
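The block-wise training loop described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of the idea: frozen prefix blocks run under torch.no_grad() so their activations and gradients are never stored, only the active block (plus a small auxiliary output head) is updated, a toy loss-plateau test stands in for the paper's pace controller, and a simple memory/speed filter stands in for its participant selector. The block partitioning, head design, convergence test, and selection criterion here are assumptions for illustration, not SmartFreeze's exact algorithms.

```python
# Minimal sketch of progressive block freezing, assuming a PyTorch setting.
# Block boundaries, auxiliary heads, and the convergence test are illustrative
# stand-ins for SmartFreeze's components, not the paper's exact design.
import torch
import torch.nn as nn

class BlockwiseModel(nn.Module):
    """A model split into sequential blocks with one output head per block."""
    def __init__(self, blocks, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        # Lightweight heads let a partial stack of blocks produce logits.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.LazyLinear(num_classes))
            for _ in blocks
        ])

    def forward(self, x, active_idx):
        # Frozen prefix: no_grad() means no activations or gradients are
        # stored for these blocks, which is where the memory saving comes from.
        with torch.no_grad():
            for blk in self.blocks[:active_idx]:
                x = blk(x)
        x = self.blocks[active_idx](x)     # only this block is trained
        return self.heads[active_idx](x)   # its auxiliary head closes the loss

def freeze_block(model, idx):
    """Permanently freeze block idx once it is declared converged."""
    for p in model.blocks[idx].parameters():
        p.requires_grad_(False)

def converged(loss_history, window=5, eps=1e-3):
    """Toy pace-controller test: freeze when the loss stops improving."""
    if len(loss_history) < 2 * window:
        return False
    recent = sum(loss_history[-window:]) / window
    earlier = sum(loss_history[-2 * window:-window]) / window
    return earlier - recent < eps

def select_participants(clients, block_mem_cost, k):
    """Hypothetical selector: keep clients whose free memory covers the
    active block, then prefer faster devices (a crude stand-in for jointly
    weighing memory capacity, statistical, and system heterogeneity)."""
    feasible = [c for c in clients if c["free_mem"] >= block_mem_cost]
    return sorted(feasible, key=lambda c: c["speed"], reverse=True)[:k]
```

In such a scheme, each client would run only the active block's forward and backward pass in a round and report its local loss, so that a server-side controller can decide when to freeze the block and advance training to the next one.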
Pages: 10
Related Papers
50 records in total
  • [1] Heterogeneity-aware fair federated learning
    Li, Xiaoli
    Zhao, Siran
    Chen, Chuan
    Zheng, Zibin
    INFORMATION SCIENCES, 2023, 619 : 968 - 986
  • [2] AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning
    Kim, Young Geun
    Wu, Carole-Jean
    PROCEEDINGS OF 54TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO 2021, 2021, : 183 - 198
  • [3] Heterogeneity-aware device selection for efficient federated edge learning
    Shi, Yiran
    Nie, Jieyan
    Li, Xingwei
    Li, Hui
    INTERNATIONAL JOURNAL OF INTELLIGENT NETWORKS, 2024, 5 : 293 - 301
  • [4] FLASH: Heterogeneity-Aware Federated Learning at Scale
    Yang, Chengxu
    Xu, Mengwei
    Wang, Qipeng
    Chen, Zhenpeng
    Huang, Kang
    Ma, Yun
    Bian, Kaigui
    Huang, Gang
    Liu, Yunxin
    Jin, Xin
    Liu, Xuanzhe
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (01) : 483 - 500
  • [5] FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient Federated Learning
    Kim, Young Geun
    Wu, Carole-Jean
    2022 IEEE INTERNATIONAL SYMPOSIUM ON WORKLOAD CHARACTERIZATION (IISWC 2022), 2022, : 117 - 129
  • [6] HADFL: Heterogeneity-aware Decentralized Federated Learning Framework
    Cao, Jing
    Lian, Zirui
    Liu, Weihong
    Zhu, Zongwei
    Ji, Cheng
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 1 - 6
  • [7] Data Heterogeneity-Aware Personalized Federated Learning for Diagnosis
    Lin, Huiyan
    Li, Heng
    Jin, Haojin
    Yu, Xiangyang
    Yu, Kuai
    Liang, Chenhao
    Fu, Huazhu
    Liu, Jiang
    OPHTHALMIC MEDICAL IMAGE ANALYSIS, OMIA 2024, 2025, 15188 : 53 - 62
  • [8] Federated Learning With Heterogeneity-Aware Probabilistic Synchronous Parallel on Edge
    Zhao, Jianxin
    Han, Rui
    Yang, Yongkai
    Catterall, Benjamin
    Liu, Chi Harold
    Chen, Lydia Y.
    Mortier, Richard
    Crowcroft, Jon
    Wang, Liang
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2022, 15 (02) : 614 - 626
  • [9] HARMONY: Heterogeneity-Aware Hierarchical Management for Federated Learning System
    Tian, Chunlin
    Li, Li
    Shi, Zhan
    Wang, Jun
    Xu, ChengZhong
    2022 55TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO), 2022, : 631 - 645
  • [10] FedDM: Data and Model Heterogeneity-Aware Federated Learning via Dynamic Weight Sharing
    Shen, Leming
    Zheng, Yuanqing
    2023 IEEE 43RD INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS, 2023, : 975 - 976