Efficient Wireless Federated Learning with Adaptive Model Pruning

Cited by: 0
Authors
Chen, Zhixiong [1 ]
Yi, Wenqiang [1 ]
Lambotharan, Sangarapillai [2 ]
Nallanathan, Arumugam [1 ]
Affiliations
[1] Queen Mary Univ London, London, England
[2] Loughborough Univ, Loughborough, England
Source
IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM | 2023
Keywords
Model pruning; federated learning; resource management
DOI
10.1109/GLOBECOM54140.2023.10437211
CLC number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline codes
0808; 0809
Abstract
For wireless federated learning (FL), this work proposes an adaptive model pruning-based FL (AMP-FL) framework in which the edge server dynamically generates sub-models by pruning the global model to adapt to devices' heterogeneous computation capabilities and time-varying wireless channel conditions. To mitigate the negative effect of the differing sub-model structures on learning convergence, this work designs a compensation strategy that fills the pruned regions of each sub-model with historical gradients. Since the freshness of gradients dominates the convergence speed, this work also defines an age of information (AoI) metric to characterize the staleness of the regions of the local gradients. Building on the compensation strategy, we formulate a joint device scheduling, model pruning, and resource block allocation problem to minimize the average AoI of the local gradients. To solve this problem, we theoretically derive an optimal model pruning scheme and then transform the original problem into an equivalent linear program that can be solved in polynomial time. Simulation results on the CIFAR-10 dataset show that the proposed AMP-FL outperforms the benchmark schemes, achieving faster convergence and over 7% higher learning accuracy.
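The Python sketch below is a minimal illustration of the server-side procedure the abstract describes, under stated assumptions: the global model is treated as a flat parameter vector partitioned into regions by per-device pruning masks, fresh sub-model gradients are averaged, pruned regions are compensated with the most recently received (historical) gradients, and a per-region AoI counter is reset when a region is refreshed and incremented otherwise. The function name aggregate_round, the averaging rule, and all variable names are hypothetical and are not taken from the paper.

import numpy as np

def aggregate_round(global_model, device_updates, history, aoi, lr=0.1):
    # global_model   : 1-D np.ndarray of flattened global model parameters
    # device_updates : list of (mask, gradient) pairs; mask is a boolean array
    #                  marking the regions this device kept (i.e. not pruned)
    # history        : 1-D np.ndarray holding the last received gradient per region
    # aoi            : 1-D np.ndarray of rounds since each region was last refreshed
    grad_sum = np.zeros_like(global_model)
    count = np.zeros_like(global_model)
    for mask, grad in device_updates:
        grad_sum[mask] += grad[mask]   # accumulate fresh gradients per region
        count[mask] += 1

    fresh = count > 0                  # regions trained by at least one device this round
    history[fresh] = grad_sum[fresh] / count[fresh]  # refresh historical gradients

    # Compensation (assumed rule): pruned regions reuse historical gradients,
    # fresh regions use the newly averaged gradients, so the full vector is updated.
    global_model -= lr * history

    # AoI bookkeeping: refreshed regions reset to zero, stale regions age by one round.
    aoi[fresh] = 0
    aoi[~fresh] += 1
    return global_model, history, aoi

The paper's exact aggregation weights, AoI definition, and pruning/scheduling optimization are derived analytically in the full text and may differ from this simplified sketch.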
Pages: 7592-7597
Page count: 6