Lazy Aggregation for Heterogeneous Federated Learning

Cited by: 4
Authors
Xu, Gang [1 ]
Kong, De-Lun [1 ]
Chen, Xiu-Bo [2 ]
Liu, Xin [3 ]
Affiliations
[1] North China Univ Technol, Sch Informat Sci & Technol, Beijing 100144, Peoples R China
[2] Beijing Univ Posts & Telecommun, Informat Secur Ctr, State Key Lab Networking & Switching Technol, Beijing 100876, Peoples R China
[3] Inner Mongolia Univ Sci & Technol, Sch Informat Engn, Baotou 014010, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 17
Keywords
federated learning; heterogeneous data; lazy aggregation; cross-device momentum;
DOI
10.3390/app12178515
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Federated learning (FL) is a distributed neural-network training paradigm with privacy protection: multiple devices cooperatively train a model and improve its generalization without leaking their local data. Unlike centralized training, FL is susceptible to heterogeneous data; biased gradient estimates hinder convergence of the global model, and traditional sampling techniques cannot be applied to FL due to privacy constraints. This paper therefore proposes a novel FL framework, federated lazy aggregation (FedLA), which reduces the aggregation frequency to obtain high-quality gradients and improve robustness under non-IID data. To decide when to aggregate, the change rate of the models' weight divergence (WDR) is introduced into FL. Moreover, the collected gradients also help FL escape saddle points without extra communication. The cross-device momentum (CDM) mechanism can significantly raise the performance ceiling of the global model under non-IID data. We evaluate the performance of several popular algorithms, including FedLA and FedLA with momentum (FedLAM). The results show that FedLAM achieves the best performance in most scenarios, and that the performance of the global model can also be improved in IID scenarios.
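The lazy-aggregation idea described in the abstract can be sketched in a few lines; note this is a minimal illustrative reconstruction, not the paper's actual algorithm. The function names (`fedla_round`), the specific WDR formula, and the parameters `beta` and `wdr_thresh` are all assumptions for the sketch: client updates are buffered each round, and only applied (with server-side momentum, standing in for CDM) once the change rate of the weight divergence stabilizes.

```python
import numpy as np

def weight_divergence(w_clients, w_global):
    """Mean L2 distance between client models and the global model."""
    return float(np.mean([np.linalg.norm(w - w_global) for w in w_clients]))

def fedla_round(w_global, w_clients, buffer, prev_div, momentum,
                beta=0.9, wdr_thresh=0.05):
    """One server step of a hypothetical FedLA-style loop: buffer client
    pseudo-gradients and aggregate lazily once the change rate of the
    weight divergence (WDR) falls below a threshold."""
    div = weight_divergence(w_clients, w_global)
    wdr = abs(div - prev_div) / max(prev_div, 1e-12)  # change rate of divergence
    # Buffer the averaged pseudo-gradient instead of applying it immediately.
    buffer.append(np.mean(w_clients, axis=0) - w_global)
    if wdr < wdr_thresh:  # divergence has stabilized -> aggregate now
        grad = np.mean(buffer, axis=0)
        momentum = beta * momentum + grad   # server-side momentum (stand-in for CDM)
        w_global = w_global + momentum
        buffer.clear()
    return w_global, buffer, div, momentum
```

In this sketch, rounds with a rapidly changing divergence only accumulate updates in the buffer; the global model moves only on aggregation rounds, which is what reduces the effective aggregation frequency.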
Pages: 14