SAM: An Efficient Approach With Selective Aggregation of Models in Federated Learning

Cited by: 4
Authors
Shi, Yuchen [1 ]
Fan, Pingyi [1 ]
Zhu, Zheqi [1 ]
Peng, Chenghui [2 ]
Wang, Fei [2 ]
Letaief, Khaled B. [3 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[2] Huawei Technol, Wireless Technol Lab, Shanghai 201206, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, Issue 11
Keywords
Biological system modeling; Training; Convergence; Computational modeling; Internet of Things; Numerical models; Distributed databases; Communication efficiency; federated learning; model selection; network utilization; QUANTIZATION;
DOI
10.1109/JIOT.2024.3373822
CLC classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Federated learning (FL) is a promising distributed learning mechanism that revolutionizes how we interact with data in the IoT ecosystem. Given the rapidly growing scale of smart devices and the limited transmission resources of networks, a simple, consistent, and scalable FL framework that addresses the communication bottleneck is urgently needed. In this work, we propose an efficient approach with selective aggregation of models (SAM) to mitigate the communication overhead in FL systems. SAM lets each local client upload its model only with a certain probability, which significantly reduces costly communication. We design the SAM algorithm, derive a convergence bound for nonconvex objectives on heterogeneous data that shows how the selection probability and the number of participating clients affect system performance, and quantify the savings in network resource utilization by modeling the system as a queuing system. Extensive experiments show that SAM substantially alleviates the communication bottleneck at only a marginal cost in model performance. We also show that SAM is a communication-efficient method that can be freely combined with other FL frameworks.
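To make the mechanism concrete, here is a minimal, hypothetical Python sketch of one SAM-style round, assuming a toy scalar model, a made-up local update, and a plain average over whichever models arrive. The names (`sam_round`, `local_update`, `upload_prob`) and the aggregation rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch only: probabilistic client upload in one FL round.
# The real SAM aggregation rule and its convergence analysis are in the paper.
import random
from typing import List

def local_update(global_model: List[float], data_shift: float, lr: float = 0.1) -> List[float]:
    """Toy local training step: nudge each weight toward a client-specific target."""
    return [w - lr * (w - data_shift) for w in global_model]

def sam_round(global_model: List[float], client_shifts: List[float],
              upload_prob: float, rng: random.Random) -> List[float]:
    """One round: every client trains locally, but uploads only with probability upload_prob."""
    uploads = []
    for shift in client_shifts:
        local_model = local_update(global_model, shift)
        if rng.random() < upload_prob:  # selective upload: skipping saves uplink bandwidth
            uploads.append(local_model)
    if not uploads:  # no client uploaded this round: keep the current global model
        return global_model
    # Aggregate only the models that were actually received (simple average).
    return [sum(ws) / len(uploads) for ws in zip(*uploads)]

if __name__ == "__main__":
    rng = random.Random(0)
    model = [0.0, 0.0]
    client_shifts = [1.0, 2.0, 3.0, 4.0]  # heterogeneous client data, as scalar targets
    for _ in range(20):
        model = sam_round(model, client_shifts, upload_prob=0.5, rng=rng)
    print(model)  # drifts toward the mean of the client targets
```

With upload probability p, the expected uplink traffic per round is p times that of full participation, which is the source of the communication savings the abstract describes.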
Pages: 20769-20783 (15 pages)