Rethinking Personalized Client Collaboration in Federated Learning

Cited by: 2
Authors
Wu, Leijie [1 ]
Guo, Song [2 ]
Ding, Yaohong [1 ]
Wang, Junxiao [3 ]
Xu, Wenchao [1 ]
Zhan, Yufeng [4 ]
Kermarrec, Anne-Marie [5 ]
Affiliations
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[3] Guangzhou Univ, Guangzhou 511370, Guangdong, Peoples R China
[4] Beijing Inst Technol, Beijing 100811, Peoples R China
[5] Ecole Polytech Fed Lausanne EPFL, CH-1015 Lausanne, Switzerland
Funding
National Natural Science Foundation of China
Keywords
Data models; Collaboration; Privacy; Measurement; Game theory; Training; Federated learning; Coalition game theory; Multiwise collaboration; Personalized federated learning; Shapley value
DOI
10.1109/TMC.2024.3396218
CLC Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL) has gained considerable attention recently, as it allows clients to cooperatively train a global machine learning model without sharing raw data. However, its performance can be compromised by high heterogeneity in clients' local data distributions, commonly known as Non-IID (non-independent and identically distributed) data. Moreover, collaboration among highly dissimilar clients exacerbates this performance degradation. Personalized FL seeks to mitigate this by enabling clients to collaborate primarily with others who have similar data characteristics, thereby producing personalized models. We observe that existing methods for assessing model similarity often fail to capture the genuine relevance of client domains. In response, our paper enhances personalized client collaboration in FL by introducing a metric for domain relevance between clients. Specifically, to facilitate optimal coalition formation, we measure the marginal contributions of client models using coalition game theory, providing a more accurate representation of potential client domain relevance within the FL privacy-preserving framework. Based on this metric, we then adjust each client's coalition membership and implement a personalized FL aggregation algorithm that is robust to Non-IID data domains. We provide a theoretical analysis of the algorithm's convergence and generalization capabilities. Our extensive evaluations on multiple datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100, and under varying Non-IID data distributions (Pathological and Dirichlet), demonstrate that our personalized collaboration approach consistently outperforms contemporary benchmarks in terms of accuracy for individual clients.
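The abstract's core mechanism — scoring each client's marginal contribution to a coalition with the Shapley value — is usually approximated by permutation sampling, since exact computation is exponential in the number of clients. The sketch below illustrates that idea only; `shapley_values` and `toy_utility` are hypothetical names, and the toy utility stands in for what would, in the paper's setting, be something like the validation performance of a model aggregated over a client coalition. It is a minimal illustration, not the authors' algorithm.

```python
import random

def shapley_values(clients, utility, num_samples=200, seed=0):
    """Monte-Carlo estimate of each client's Shapley value.

    clients: list of client ids.
    utility: maps a frozenset of client ids to a coalition utility
             (e.g., accuracy of the model aggregated over that coalition).
    """
    rng = random.Random(seed)
    phi = {c: 0.0 for c in clients}
    for _ in range(num_samples):
        perm = clients[:]
        rng.shuffle(perm)          # sample a random join order
        coalition = frozenset()
        prev_u = utility(coalition)
        for c in perm:
            coalition = coalition | {c}
            u = utility(coalition)
            phi[c] += u - prev_u   # marginal contribution of c
            prev_u = u
    return {c: v / num_samples for c, v in phi.items()}

def toy_utility(coalition):
    # Hypothetical utility: clients 0 and 1 share a domain (synergy),
    # client 2 is from an unrelated domain and adds little.
    score = 0.0
    if 0 in coalition: score += 1.0
    if 1 in coalition: score += 1.0
    if 0 in coalition and 1 in coalition: score += 0.5
    if 2 in coalition: score += 0.1
    return score

phi = shapley_values([0, 1, 2], toy_utility)
```

In this toy setting, the similar clients 0 and 1 receive high scores while the dissimilar client 2 scores low, which is the signal such a metric would use to form coalitions of domain-relevant clients.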
Pages: 11227 - 11239
Page count: 13