Adaptive Model Pruning for Hierarchical Wireless Federated Learning

Cited by: 0
Authors
Liu, Xiaonan [1 ]
Wang, Shiqiang [2 ]
Deng, Yansha [3 ]
Nallanathan, Arumugam [1 ]
Affiliations
[1] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London, England
[2] IBM TJ Watson Res Ctr, Yorktown Hts, NY USA
[3] Kings Coll London, Dept Engn, London, England
Keywords
Hierarchical wireless network; federated pruning; machine learning; communication and computation latency
DOI
10.1109/WCNC57260.2024.10571275
CLC Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification
0812
Abstract
Federated Learning (FL) is a promising privacy-preserving distributed learning framework in which a server aggregates models updated by multiple devices without accessing their private datasets. Hierarchical FL (HFL), with its device-edge-cloud aggregation hierarchy, enjoys both the cloud server's access to more datasets and the edge servers' efficient communication with devices. However, the learning latency grows with the HFL network scale, as the number of edge servers and devices with limited local computation capability and communication bandwidth increases. To address this issue, in this paper we introduce model pruning for HFL in wireless networks to reduce the neural network scale. We present the convergence rate of an upper bound on the l2-norm of gradients for HFL with model pruning, analyze the computation and communication latency of the proposed model pruning scheme, and formulate an optimization problem to maximize the convergence rate under a given latency threshold by jointly optimizing the pruning ratio and wireless resource allocation. By decoupling the optimization problem and applying Karush-Kuhn-Tucker (KKT) conditions, we derive closed-form solutions for the pruning ratio and wireless resource allocation. Simulation results show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without model pruning while reducing communication cost by about 50%.
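The abstract describes pruning each device's model by a given ratio to shrink computation and communication cost. As a rough illustration of what applying a pruning ratio to a parameter vector means, the following is a minimal magnitude-based pruning sketch; it is a generic technique and not the paper's exact scheme, and the function name and interface are hypothetical.

```python
import numpy as np

def prune_by_ratio(weights, pruning_ratio):
    """Zero out the smallest-magnitude fraction of weights.

    weights: array of model parameters.
    pruning_ratio: fraction in [0, 1) of weights to remove.
    Returns the pruned weights and the boolean keep-mask.
    """
    magnitudes = np.abs(weights).ravel()
    k = int(pruning_ratio * magnitudes.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # Threshold at the k-th smallest magnitude; everything at or
    # below it is pruned (set to zero).
    threshold = np.partition(magnitudes, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

Only the surviving (nonzero) parameters need to be computed on and transmitted to the edge server, which is the source of the latency savings the paper analyzes.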
Pages: 6
Related Papers
(50 records total)
  • [21] Adaptive Idle Model Fusion in Hierarchical Federated Learning for Unbalanced Edge Regions
    Xu, Jiuyun
    Fan, Hanfei
    Wang, Qiqi
    Jiang, Yinyue
    Duan, Qiang
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11 (05): : 4603 - 4616
  • [22] Federated Learning with User Mobility in Hierarchical Wireless Networks
    Feng, Chenyuan
    Yang, Howard H.
    Hu, Deshun
    Quek, Tony Q. S.
    Zhao, Zhiwei
    Min, Geyong
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [23] FLCAP: Federated Learning with Clustered Adaptive Pruning for Heterogeneous and Scalable Systems
    Miralles, Hugo
    Tosic, Tamara
    Riveill, Michel
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [24] Federated adaptive pruning with differential privacy
    Wang, Zhousheng
    Shen, Jiahe
    Dai, Hua
    Xu, Jian
    Yang, Geng
    Zhou, Hao
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2025, 169
  • [25] An Adaptive Compression and Communication Framework for Wireless Federated Learning
    Yang, Yang
    Dang, Shuping
    Zhang, Zhenrong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 10835 - 10854
  • [26] Adaptive Retransmission Design for Wireless Federated Edge Learning
    Xu, Xinyi
    Liu, Shengli
    Yu, Guanding
    ZTE Communications, 2023, 21 (01) : 3 - 14
  • [27] Resource Optimized Hierarchical Split Federated Learning for Wireless Networks
    Khan, Latif U.
    Guizani, Mohsen
    Hong, Choong Seon
    2023 CYBER-PHYSICAL SYSTEMS AND INTERNET-OF-THINGS WEEK, CPS-IOT WEEK WORKSHOPS, 2023, : 254 - 259
  • [28] Hierarchical federated learning with local model embedding
    He, Yunlong
    Yan, Dandan
    Chen, Fei
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 123
  • [29] Model Pruning Enables Efficient Federated Learning on Edge Devices
    Jiang, Yuang
    Wang, Shiqiang
    Valls, Victor
    Ko, Bong Jun
    Lee, Wei-Han
    Leung, Kin K.
    Tassiulas, Leandros
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (12) : 10374 - 10386
  • [30] FedADP: Communication-Efficient by Model Pruning for Federated Learning
    Liu, Haiyang
    Shi, Yuliang
    Su, Zhiyuan
    Zhang, Kun
    Wang, Xinjun
    Yan, Zhongmin
    Kong, Fanyu
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 3093 - 3098