DPFed: Toward Fair Personalized Federated Learning with Fast Convergence

Cited: 2
Authors
Wu, Jiang [1]
Liu, Xuezheng [1]
Liu, Jiahao [1]
Hu, Miao [1]
Wu, Di [1]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
Source
2022 18TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN) | 2022
Keywords
Federated learning; deep learning; deep reinforcement learning
DOI
10.1109/MSN57253.2022.00087
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject Classification
0812
Abstract
Instead of training a single global model to fit the needs of all clients, personalized federated learning aims to train multiple client-specific models that better account for data disparities across participating clients. However, existing solutions suffer from serious unfairness in model accuracy across clients and from slow convergence under non-IID data. In this paper, we propose a novel personalized federated learning framework, called DPFed, which employs deep reinforcement learning (DRL) to identify relationships between clients and enable closer collaboration among similar clients. By exploiting such relationships, DPFed can personalize model aggregation for each client and achieve fast convergence. Moreover, by regularizing the reward function of the DRL agent, we reduce the variance of model accuracy across clients and achieve a higher level of fairness. Finally, we conduct extensive experiments to evaluate the effectiveness of the proposed framework on a variety of datasets and under varying degrees of non-IID data distribution. The results demonstrate that DPFed outperforms alternative approaches in terms of convergence speed, model accuracy, and fairness.
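The abstract describes two mechanisms: per-client weighted model aggregation driven by a DRL policy, and a reward regularized to shrink the variance of accuracy across clients. The Python sketch below illustrates what such an aggregation step and reward could look like; the function names, the weight interface, and the mean-minus-variance penalty form are illustrative assumptions based only on the abstract, not the paper's actual formulation.

import numpy as np

def personalized_aggregate(client_params, weights_k):
    # Weighted aggregation of client parameter vectors for client k.
    # weights_k: aggregation weights produced by the DRL agent for
    # client k (hypothetical interface); normalized to sum to 1.
    w = np.asarray(weights_k, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

def fairness_regularized_reward(accuracies, lam=1.0):
    # Reward high mean accuracy while penalizing the variance of
    # per-client accuracy (assumed penalty form); lam trades off
    # average accuracy against fairness.
    acc = np.asarray(accuracies, dtype=float)
    return float(acc.mean() - lam * acc.var())

# Example: three clients, where client 0's personalized model draws
# mostly on its own update and that of the most similar client.
params = [np.zeros(4), np.ones(4), 2 * np.ones(4)]
model_0 = personalized_aggregate(params, [0.6, 0.3, 0.1])
reward = fairness_regularized_reward([0.81, 0.78, 0.80], lam=1.0)

Under this assumed form, a larger lam pushes the agent toward aggregation policies with more uniform per-client accuracy, at some cost in average accuracy, which matches the fairness-versus-accuracy trade-off the abstract alludes to.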
Pages: 510-517
Number of pages: 8