FedMef: Towards Memory-efficient Federated Dynamic Pruning

Cited by: 5
Authors
Huang, Hong [1 ]
Zhuang, Weiming [2 ]
Chen, Chen [2 ]
Lyu, Lingjuan [2 ]
Affiliations
[1] City University of Hong Kong, Hong Kong, China
[2] Sony AI, Tokyo, Japan
Source
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Keywords
DOI
10.1109/CVPR52733.2024.02601
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning (FL) promotes decentralized training while prioritizing data confidentiality. However, deploying it on resource-constrained devices is challenging because training deep learning models demands substantial computation and memory. Neural network pruning techniques, such as dynamic pruning, could improve model efficiency, but directly adopting them in FL still poses substantial challenges, including post-pruning performance degradation and high activation memory usage. To address these challenges, we propose FedMef, a novel and memory-efficient federated dynamic pruning framework. FedMef comprises two key components. First, we introduce budget-aware extrusion, which maintains pruning efficiency while preserving post-pruning performance by salvaging crucial information from parameters marked for pruning within a given budget. Second, we propose scaled activation pruning to effectively reduce activation memory footprints, which is particularly beneficial for deploying FL on memory-limited devices. Extensive experiments demonstrate the effectiveness of the proposed FedMef. In particular, it reduces memory footprint by 28.5% compared to state-of-the-art methods while achieving superior accuracy.
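The abstract describes budget-aware extrusion only at a high level: weights slated for removal keep training for a limited budget while their information is pushed into the surviving weights before the actual prune. The sketch below shows one generic way such a pre-pruning extrusion phase could look in PyTorch; the names and hyperparameters (prune_fraction, budget_steps, extrusion_lambda) and the toy data are illustrative assumptions, not the paper's implementation.

# Illustrative sketch only: generic magnitude-based dynamic pruning with an
# "extrusion"-style penalty on weights slated for removal. Not the FedMef code;
# all hyperparameter names below are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(32, 10)                      # stand-in for one client's model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

prune_fraction = 0.3                           # fraction of weights removed this pruning round
budget_steps = 20                              # local steps of "extrusion" before removal
extrusion_lambda = 1e-2                        # extra decay applied to to-be-pruned weights

# 1) Mark the lowest-magnitude weights for pruning.
w = model.weight.detach().abs().flatten()
k = int(prune_fraction * w.numel())
threshold = w.kthvalue(k).values
prune_mask = model.weight.detach().abs() <= threshold   # True = slated for pruning

# 2) Extrusion phase: keep training on local data, but add a penalty that drives
#    the marked weights toward zero so the surviving weights absorb their role.
for _ in range(budget_steps):
    x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))   # toy local batch
    loss = nn.functional.cross_entropy(model(x), y)
    loss = loss + extrusion_lambda * (model.weight[prune_mask] ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

# 3) Hard-prune: zero the marked weights; a real system would also keep this mask
#    applied during later local training and when aggregating sparse updates.
with torch.no_grad():
    model.weight[prune_mask] = 0.0
print(f"pruned {prune_mask.sum().item()} of {prune_mask.numel()} weights")

In an actual FL deployment the mask would also have to stay consistent with the sparse updates exchanged between clients and server, and the paper's second component, scaled activation pruning, would further cut the activation memory each device stores for backpropagation.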
Pages: 27538-27547
Page count: 10