Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing

Cited: 0
Authors
Wu, Yebo [1 ]
Li, Li [1 ]
Tian, Chunlin [1 ]
Chang, Tao [2 ]
Lin, Chi [3 ]
Wang, Cong [4 ]
Xu, Cheng-Zhong [1 ]
Affiliations
[1] Univ Macau, State Key Lab IoTSC, Taipa, Macao, Peoples R China
[2] Natl Univ Def Technol, Changsha, Peoples R China
[3] Dalian Univ Technol, Dalian, Peoples R China
[4] Zhejiang Univ, Hangzhou, Peoples R China
Keywords
Federated Learning; On-Device Training; Heterogeneous Memory
DOI
10.1109/IWQoS61813.2024.10682916
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Federated Learning (FL) has emerged as a learning paradigm that enables multiple devices to collaboratively train a shared model while preserving data privacy. However, the intensive memory footprint of the training process severely bottlenecks the deployment of FL on resource-limited mobile devices in real-world settings. A framework that effectively reduces the memory footprint while guaranteeing training efficiency and model accuracy is therefore crucial for FL. In this paper, we propose SmartFreeze, a framework that effectively reduces the memory footprint by conducting training in a progressive manner. Instead of updating the full model in each training round, SmartFreeze divides the shared model into blocks, each consisting of a specified number of layers. It first trains the front block with a well-designed output module, safely freezes it after convergence, and then triggers the training of the next one. This process iterates until the whole model has been trained. In this way, the backward computation of the frozen blocks, together with the memory for storing their intermediate activations and gradients, is saved. Beyond the progressive training framework, SmartFreeze comprises two core components: a pace controller and a participant selector. The pace controller monitors the training progress of each block at runtime and safely freezes it after convergence, while the participant selector selects the right devices to participate in the training of each block by jointly considering memory capacity as well as statistical and system heterogeneity. Extensive experiments evaluate the effectiveness of SmartFreeze on both simulation and hardware testbeds. The results demonstrate that SmartFreeze reduces average memory usage by up to 82%, improves model accuracy by up to 83.1%, and accelerates training by up to 2.02x.
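The progressive block-wise schedule described in the abstract can be sketched as follows. This is a minimal illustrative sketch: the `Block` abstraction, the simulated loss decay, and the loss-threshold convergence test are assumptions made for illustration, not the paper's actual implementation of SmartFreeze or its pace controller.

```python
class Block:
    """One trainable block of the shared model (illustrative stand-in)."""
    def __init__(self, name):
        self.name = name
        self.frozen = False
        self.loss = 1.0  # simulated training loss

    def train_round(self):
        # Simulated local update: loss decays by half each round.
        self.loss *= 0.5


def progressive_train(blocks, converge_threshold=0.1, max_rounds=20):
    """Train blocks front-to-back, freezing each after convergence.

    Only the current unfrozen block incurs backward computation, so the
    activations and gradients of already-frozen blocks need not be
    stored -- the source of the memory savings described in the paper.
    """
    schedule = []  # log of (round, active block name)
    rnd = 0
    for block in blocks:
        # Hypothetical pace controller: keep training the current block
        # until its loss falls below the convergence threshold.
        while block.loss > converge_threshold and rnd < max_rounds:
            block.train_round()
            schedule.append((rnd, block.name))
            rnd += 1
        block.frozen = True  # safely freeze after convergence
    return schedule


blocks = [Block("B0"), Block("B1"), Block("B2")]
log = progressive_train(blocks)
assert all(b.frozen for b in blocks)
```

With the decay rate above, each block converges after four rounds, so the three blocks finish in twelve rounds total, trained strictly one after another.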
Pages: 10
Related Papers
50 items in total
  • [21] Heterogeneity-Aware Cooperative Federated Edge Learning With Adaptive Computation and Communication Compression
    Zhang, Zhenxiao
    Gao, Zhidong
    Guo, Yuanxiong
    Gong, Yanmin
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (03) : 2073 - 2084
  • [22] A two-phase half-async method for heterogeneity-aware federated learning
    Ma, Tianyi
    Mao, Bingcheng
    Chen, Ming
    NEUROCOMPUTING, 2022, 485 : 134 - 154
  • [23] Petrel: Heterogeneity-Aware Distributed Deep Learning Via Hybrid Synchronization
    Zhou, Qihua
    Guo, Song
    Qu, Zhihao
    Li, Peng
    Li, Li
    Guo, Minyi
    Wang, Kun
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (05) : 1030 - 1043
  • [24] Heterogeneity-aware pruning framework for personalized federated learning in remote sensing scene classification
    Hu, Zhuping
    Gong, Maoguo
    Dong, Zhuowei
    Lu, Yiheng
    Li, Jianzhao
    Zhao, Yue
    KNOWLEDGE-BASED SYSTEMS, 2025, 311
  • [25] Heterogeneity-Aware Distributed Machine Learning Training via Partial Reduce
    Miao, Xupeng
    Nie, Xiaonan
    Shao, Yingxia
    Yang, Zhi
    Jiang, Jiawei
    Ma, Lingxiao
    Cui, Bin
    SIGMOD '21: PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2021, : 2262 - 2270
  • [26] FedCure: A Heterogeneity-Aware Personalized Federated Learning Framework for Intelligent Healthcare Applications in IoMT Environments
    Sachin, D. N.
    Annappa, B.
    Hegde, Saumya
    Abhijit, Chunduru Sri
    Ambesange, Sateesh
    IEEE ACCESS, 2024, 12 : 15867 - 15883
  • [27] FedVisual: Heterogeneity-Aware Model Aggregation for Federated Learning in Visual-Based Vehicular Crowdsensing
    Zhang, Wenjun
    Liu, Xiaoli
    Zhang, Ruoyi
    Zhu, Chao
    Tarkoma, Sasu
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (22) : 36191 - 36202
  • [28] HiFlash: Communication-Efficient Hierarchical Federated Learning With Adaptive Staleness Control and Heterogeneity-Aware Client-Edge Association
    Wu, Qiong
    Chen, Xu
    Ouyang, Tao
    Zhou, Zhi
    Zhang, Xiaoxi
    Yang, Shusen
    Zhang, Junshan
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (05) : 1560 - 1579
  • [29] Joint heterogeneity-aware personalized federated search for energy efficient battery-powered edge computing
    Yang, Zhao
    Zhang, Shengbing
    Li, Chuxi
    Wang, Miao
    Yang, Jiaying
    Zhang, Meng
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 146 : 178 - 194
  • [30] Communication-Efficient Federated Learning With Gradual Layer Freezing
    Malan, Erich
    Peluso, Valentino
    Calimera, Andrea
    Macii, Enrico
    IEEE EMBEDDED SYSTEMS LETTERS, 2023, 15 (01) : 25 - 28