ARFL: Adaptive and Robust Federated Learning

Cited by: 4
Authors
Uddin, Md Palash [1]
Xiang, Yong [1]
Cai, Borui [1]
Lu, Xuequan [2]
Yearwood, John [1]
Gao, Longxiang [3,4]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Geelong, Vic 3220, Australia
[2] La Trobe Univ, Bundoora, Vic 3086, Australia
[3] Qilu Univ Technol, Shandong Acad Sci, Jinan 250316, Shandong, Peoples R China
[4] Nat Supercomp Ctr Jinan, Shandong Comp Sci Ctr, Jinan 250101, Shandong, Peoples R China
Funding
Australian Research Council;
Keywords
Distributed learning; federated learning; parallel optimization; communication overhead; adaptive workload; adaptive step size; proximal term; robust aggregation;
DOI
10.1109/TMC.2023.3310248
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated Learning (FL) is a machine learning technique that enables multiple clients holding individual datasets to collaboratively train a model without exchanging those datasets. Conventional FL approaches often assign a fixed workload (number of local epochs) and step size (learning rate) to the clients during client-side local training and weight all collaborating clients' model parameters evenly during server-side global aggregation. Consequently, they frequently suffer from data heterogeneity and high communication costs. In this paper, we propose a novel FL approach to mitigate these problems. On the client side, we propose an adaptive model update scheme that allocates the required number of local epochs, dynamically adjusts the learning rate, and regularizes the conventional objective function with a proximal term. On the server side, we propose a robust model aggregation strategy that replaces outlier local updates (model weights) prior to aggregation. We provide theoretical convergence results and perform extensive experiments on different data setups over the MNIST, CIFAR-10, and Shakespeare datasets, which demonstrate that our FL scheme surpasses the baselines in terms of communication speedup, test-set performance, and global convergence.
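The abstract outlines two mechanisms: client-side local training regularized by a proximal term, with an adaptive number of local epochs and learning rate, and server-side aggregation that replaces outlier client updates before averaging. Below is a minimal sketch of those two ideas on a toy linear-regression problem; the adaptation rule, the outlier test, and the function names (local_update, robust_aggregate) are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch (illustrative only): FedProx-style proximal regularisation on
# the client and a simple outlier-replacing aggregation on the server, applied
# to a toy linear-regression problem. The adaptation rule and outlier test are
# assumptions, not the paper's exact method.
import numpy as np

def local_update(w_global, X, y, lr=0.1, mu=0.5, epochs=5):
    # Client-side training: gradient descent on a least-squares loss plus the
    # proximal term (mu/2) * ||w - w_global||^2, which keeps the local model
    # close to the current global model under heterogeneous data.
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w

def robust_aggregate(client_weights, z=2.0):
    # Server-side aggregation: client updates whose distance from the
    # coordinate-wise median exceeds mean + z * std (an assumed outlier rule)
    # are supplanted by the median before plain averaging.
    W = np.stack(client_weights)
    med = np.median(W, axis=0)
    dists = np.linalg.norm(W - med, axis=1)
    cutoff = dists.mean() + z * dists.std()
    W[dists > cutoff] = med
    return W.mean(axis=0)

# One toy federation: four clients with independently generated regression data.
rng = np.random.default_rng(0)
dim = 5
w_true = rng.normal(size=dim)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, dim))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(dim)
for rnd in range(20):
    # Hypothetical adaptation: decay the step size and shrink the local
    # workload as rounds progress (illustrative, not the paper's rule).
    lr = 0.1 / (1 + 0.1 * rnd)
    epochs = max(1, 5 - rnd // 5)
    updates = [local_update(w_global, X, y, lr=lr, epochs=epochs)
               for X, y in clients]
    w_global = robust_aggregate(updates)

print("distance to true weights:", np.linalg.norm(w_global - w_true))

The proximal term limits client drift when local data distributions differ, while the median-based replacement keeps a single aberrant client from dominating the averaged global model; both choices here simply mirror the high-level description in the abstract.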
Pages: 5401-5417
Page count: 17