A Hybrid Federated Learning Architecture With Online Learning and Model Compression

Cited by: 0
Authors
Odeyomi, Olusola T. [1 ]
Ajibuwa, Opeyemi [1 ]
Roy, Kaushik [1 ]
Affiliations
[1] North Carolina Agricultural and Technical State University, Department of Computer Science, Greensboro, NC 27411, USA
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Federated learning; quantization (signal); servers; computer architecture; signal processing algorithms; accuracy; computational modeling; data models; convergence; training; bandit; compression; graph theory; online learning; challenges
DOI
10.1109/ACCESS.2024.3517710
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Federated learning allows distributed devices to jointly train a global model without breaching privacy. Conventional federated learning uses either the cross-device setting or the cross-silo setting. This work, however, focuses on a hybrid architecture that combines the cross-device and cross-silo settings. The hybrid architecture overcomes congestion at the central server in the cross-device setting, and it overcomes convergence instability in the cross-silo setting. Furthermore, this work proposes two online federated learning algorithms suited to real-time applications, unlike many existing federated learning algorithms. The first algorithm, Online Federated Learning with Compression (OFedCom), is designed for the full-information setting, where the time-varying loss function is observable and loss gradients can be computed. The second algorithm, Online Federated Learning with Compression and Bandit Feedback (OFedCom-B), is designed for the bandit setting, where the time-varying loss function is not fully observable and its gradient cannot be computed directly. Two compression techniques are incorporated into the proposed algorithms to overcome communication bottlenecks while still guaranteeing good convergence. Separate regret analyses are provided for both convex and non-convex time-varying loss functions. Simulation results show faster convergence and a better regret bound than existing algorithms.
Pages: 191046-191058
Page count: 13