Personalized Federated Learning with Parameter Propagation

Cited by: 5
Authors
Wu, Jun [1 ]
Bao, Wenxuan [1 ]
Ainsworth, Elizabeth [2 ]
He, Jingrui [1 ]
Affiliations
[1] Univ Illinois, Chicago, IL 60680 USA
[2] Univ Illinois, USDA ARS, Global Change & Photosynth Res Unit, Chicago, IL 60680 USA
Source
PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023 | 2023
Funding
USDA National Institute of Food and Agriculture; US National Science Foundation (NSF);
Keywords
federated learning; parameter propagation; negative transfer;
DOI
10.1145/3580305.3599464
CLC classification
TP [Automation & Computer Technology];
Discipline code
0812 ;
Abstract
With decentralized data collected from diverse clients, the personalized federated learning paradigm has been proposed for training machine learning models without exchanging raw data across local clients. We examine personalized federated learning from the perspective of privacy-preserving transfer learning and identify the limitations of previous personalized federated learning algorithms. First, previous works suffer from negative knowledge transfer for some clients when they focus on the overall performance of all clients. Second, high communication costs are required to explicitly learn statistical task relatedness among clients. Third, it is computationally expensive to generalize the learned knowledge from experienced clients to new clients. To address these problems, we propose a novel federated parameter propagation (FEDORA) framework for personalized federated learning. Specifically, we reformulate standard personalized federated learning as a privacy-preserving transfer learning problem, with the goal of improving the generalization performance of every client. The crucial idea behind FEDORA is to learn how to transfer and whether to transfer simultaneously, including (1) adaptive parameter propagation: each client adaptively propagates its parameters to others based on their task relatedness (e.g., explicitly measured by distribution similarity), and (2) selective regularization: each client regularizes its local personalized model with received parameters only when those parameters are positively correlated with the generalization performance of its local model. Experiments on a variety of federated learning benchmarks demonstrate the effectiveness of the proposed FEDORA framework over state-of-the-art personalized federated learning baselines.
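The abstract's two components can be illustrated with a toy NumPy sketch. This is an illustrative assumption, not the paper's actual algorithm: the function names, the cosine-similarity stand-in for distribution similarity, and the validation-loss test for positive transfer are all hypothetical choices made here for clarity.

```python
import numpy as np

def cosine_similarity(a, b):
    # Toy proxy for task relatedness between two clients' flattened parameters.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def propagate_parameters(client_params):
    """Adaptive parameter propagation (sketch): each client receives a
    relatedness-weighted average of all clients' parameter vectors,
    with negative relatedness clipped to zero."""
    vectors = list(client_params.values())
    received = {}
    for cid, w in client_params.items():
        sims = np.array([max(cosine_similarity(w, v), 0.0) for v in vectors])
        weights = sims / sims.sum()
        received[cid] = sum(a * v for a, v in zip(weights, vectors))
    return received

def selective_regularization(local, received, val_loss, lam=0.1):
    """Selective regularization (sketch): pull the local model toward the
    received parameters only if they improve held-out performance;
    otherwise reject them to avoid negative transfer."""
    if val_loss(received) < val_loss(local):
        return local - lam * (local - received)
    return local  # keep the local personalized model unchanged
```

In this sketch, a client with parameters similar to its peers absorbs more of their knowledge, while a client whose received parameters would hurt validation performance simply keeps its own model, which is the "whether to transfer" decision described in the abstract.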
Pages: 2594-2605
Page count: 12