Communication-Efficient Federated Learning With Gradual Layer Freezing

Cited by: 6
Authors
Malan, Erich [1 ]
Peluso, Valentino [2 ]
Calimera, Andrea [1 ]
Macii, Enrico [2 ]
Affiliations
[1] Politecn Torino, Dept Control & Comp Engn, I-10129 Turin, Italy
[2] Politecn Torino, Interuniv Dept Reg & Urban Studies & Planning, I-10129 Turin, Italy
Keywords
Training; Servers; Costs; Synchronization; Optimization; Standards; Data models; Communication; convolutional neural networks (CNNs); federated learning (FL); Internet of Things (IoT)
DOI
10.1109/LES.2022.3190682
Chinese Library Classification (CLC)
TP3 [computing technology; computer technology]
Discipline Code
0812
Abstract
Federated learning (FL) is a collaborative, privacy-preserving method for training deep neural networks at the edge of the Internet of Things (IoT). Despite its many advantages, existing FL implementations suffer from high communication costs that prevent adoption at scale. Specifically, the frequent model updates exchanged between the central server and the many end nodes are a source of channel congestion and high energy consumption. This letter tackles this aspect by introducing federated learning with gradual layer freezing (FedGLF), a novel FL scheme that gradually reduces the portion of the model sent back and forth, relieving the communication burden while preserving the quality of the training service. The results collected on two image classification tasks learned under different data distributions show that FedGLF outperforms conventional FL schemes, with data volume savings ranging from 14% to 59% or up to 2.5% higher accuracy.
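The abstract does not detail FedGLF's freezing schedule or criteria, so the following is only a minimal sketch of the general idea: front layers are frozen progressively over rounds, and only the still-trainable suffix of the model is exchanged and averaged. The fixed round-based schedule and every name here (local_update, FREEZE_EVERY, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of gradual layer freezing in a federated averaging loop.
# Assumption: one more front layer is frozen every FREEZE_EVERY rounds;
# frozen layers are no longer uploaded or downloaded.
import numpy as np

NUM_LAYERS = 6
NUM_CLIENTS = 4
ROUNDS = 12
FREEZE_EVERY = 3  # hypothetical schedule, not from the paper

rng = np.random.default_rng(0)
global_model = [rng.normal(size=(8, 8)) for _ in range(NUM_LAYERS)]

def local_update(layers, rng):
    """Stand-in for local SGD: perturb the trainable layers slightly."""
    return [w + 0.01 * rng.normal(size=w.shape) for w in layers]

for rnd in range(ROUNDS):
    # Layers [0, n_frozen) are frozen and excluded from communication.
    n_frozen = min(rnd // FREEZE_EVERY, NUM_LAYERS - 1)
    trainable = global_model[n_frozen:]

    # Each client trains and uploads only the trainable suffix.
    client_updates = [local_update(trainable, rng) for _ in range(NUM_CLIENTS)]

    # FedAvg over the exchanged (non-frozen) layers only.
    averaged = [np.mean([upd[i] for upd in client_updates], axis=0)
                for i in range(len(trainable))]
    global_model[n_frozen:] = averaged

    sent = sum(w.size for w in trainable)
    total = sum(w.size for w in global_model)
    print(f"round {rnd:2d}: frozen layers = {n_frozen}, "
          f"params exchanged = {sent}/{total}")
```

With this schedule, the exchanged volume shrinks from the full model in early rounds to a single layer at the end, which is the lever FedGLF uses to trade communication volume against accuracy.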
Pages: 25-28 (4 pages)
Related Papers (50 in total; first 10 shown)
  • [1] Communication-Efficient Federated Learning with Adaptive Parameter Freezing
    Chen, Chen
    Xu, Hong
    Wang, Wei
    Li, Baochun
    Li, Bo
    Chen, Li
    Zhang, Gong
    2021 IEEE 41ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2021), 2021: 1-11
  • [2] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [3] FedSL: A Communication-Efficient Federated Learning With Split Layer Aggregation
    Zhang, Weishan
    Zhou, Tao
    Lu, Qinghua
    Yuan, Yong
    Tolba, Amr
    Said, Wael
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09): 15587-15601
  • [4] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    ALGORITHMS, 2022, 15 (08)
  • [5] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [6] GWPF: Communication-efficient federated learning with Gradient-Wise Parameter Freezing
    Yang, Duo
    Gao, Yunqi
    Hu, Bing
    Jin, A-Long
    Wang, Wei
    You, Yang
    COMPUTER NETWORKS, 2024, 255
  • [7] A Layer Selection Optimizer for Communication-Efficient Decentralized Federated Deep Learning
    Barbieri, Luca
    Savazzi, Stefano
    Nicoli, Monica
    IEEE ACCESS, 2023, 11: 22155-22173
  • [8] Layer-Based Communication-Efficient Federated Learning with Privacy Preservation
    Lian, Zhuotao
    Wang, Weizheng
    Huang, Huakun
    Su, Chunhua
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2022, E105D (02): 256-263
  • [9] Communication-Efficient Model Aggregation With Layer Divergence Feedback in Federated Learning
    Wang, Liwei
    Li, Jun
    Chen, Wen
    Wu, Qingqing
    Ding, Ming
    IEEE COMMUNICATIONS LETTERS, 2024, 28 (10): 2293-2297
  • [10] Communication-Efficient Federated Learning with Heterogeneous Devices
    Chen, Zhixiong
    Yi, Wenqiang
    Liu, Yuanwei
    Nallanathan, Arumugam
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 3602-3607