Sparsified Random Partial Model Update for Personalized Federated Learning

Cited: 0
Authors
Hu, Xinyi [1 ]
Chen, Zihan [2 ]
Feng, Chenyuan [3 ]
Min, Geyong [4 ]
Quek, Tony Q. S. [2 ]
Yang, Howard H. [1 ]
Affiliations
[1] Zhejiang Univ, JU UIUC Inst, Haining 314400, Peoples R China
[2] Singapore Univ Technol & Design, Informat Syst Technol & Design Pillar, Singapore City 487372, Singapore
[3] Eurecom, F-06410 Sophia Antipolis, France
[4] Univ Exeter, Dept Comp Sci, Exeter EX4 4QF, England
Funding
National Key R&D Program of China; National Natural Science Foundation of China; National Research Foundation of Singapore;
Keywords
Training; Servers; Computational modeling; Data models; Mobile computing; Federated learning; Context modeling; Optimization; Adaptation models; Convergence; Client clustering; convergence rate; personalized federated learning; sparsification;
DOI
10.1109/TMC.2024.3507286
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Federated Learning (FL) is a privacy-preserving machine learning paradigm that enables collaborative training of a global model across multiple clients. However, practical implementations of FL often confront challenges arising from data heterogeneity and limited communication resources. To address these issues simultaneously, we develop a Sparsified Random Partial Update framework for personalized Federated Learning (SRP-pFed), built upon dynamic partial model updates. Specifically, we decouple each local model into a personal part and a shared part to achieve personalization. For each client, the fraction of the local model assigned to the personal part, referred to as the update rate, is regularly renewed over the course of training via a random walk process endowed with reinforced memory. In each global iteration, clients are clustered into groups such that clients in the same group share a common update rate. Benefiting from this design, SRP-pFed realizes model personalization while substantially reducing uplink communication costs. We conduct extensive experiments on various training tasks under diverse heterogeneous data settings. The results demonstrate that SRP-pFed consistently outperforms state-of-the-art methods in test accuracy and communication efficiency.
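For intuition only, below is a minimal sketch (assuming NumPy and a flat parameter vector) of the partial-update idea described in the abstract: each client keeps a personal fraction of its parameters local, uploads only the shared remainder, and periodically renews its update rate with a memory-reinforced random walk. The function names (renew_update_rate, split_model) and the specific walk dynamics are illustrative assumptions; the paper's actual reinforced-memory process and client-clustering rule are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def renew_update_rate(rate, history, step=0.05, low=0.1, high=0.9):
    """Hypothetical reinforced random walk: directions taken more often in the
    past become more likely to be taken again (reinforced memory)."""
    ups = history.count(+1) + 1      # +1 acts as a smoothing prior
    downs = history.count(-1) + 1
    direction = +1 if rng.random() < ups / (ups + downs) else -1
    history.append(direction)
    return float(np.clip(rate + direction * step, low, high))

def split_model(params, update_rate):
    """Split a flat parameter vector into a personal part (kept on the client)
    and a shared part (uploaded to the server)."""
    cut = int(round(len(params) * update_rate))  # update rate = personal fraction
    return params[:cut], params[cut:]

# Toy run: one client with a flat 1000-parameter "model".
params, rate, history = rng.normal(size=1000), 0.5, []
for t in range(5):
    rate = renew_update_rate(rate, history)
    personal, shared = split_model(params, rate)
    # Only the shared part would be transmitted uplink, reducing communication cost.
    print(f"round {t}: update rate {rate:.2f}, uplink parameters {shared.size}/{params.size}")
```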
Pages: 3076-3091
Number of pages: 16
Related Papers
50 records in total
  • [1] Byzantine-Robust and Communication-Efficient Personalized Federated Learning
    Zhang, Jiaojiao
    He, Xuechao
    Huang, Yue
    Ling, Qing
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2025, 73 : 26 - 39
  • [2] Collaborative Neural Architecture Search for Personalized Federated Learning
    Liu, Yi
    Guo, Song
    Zhang, Jie
    Hong, Zicong
    Zhan, Yufeng
    Zhou, Qihua
    IEEE TRANSACTIONS ON COMPUTERS, 2025, 74 (01) : 250 - 262
  • [3] Secure and Accurate Personalized Federated Learning With Similarity-Based Model Aggregation
    Tan, Zhouyong
    Le, Junqing
    Yang, Fan
    Huang, Min
    Xiang, Tao
    Liao, Xiaofeng
    IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING, 2025, 10 (01): : 132 - 145
  • [4] Federated Learning With Sparsified Model Perturbation: Improving Accuracy Under Client-Level Differential Privacy
    Hu, Rui
    Guo, Yuanxiong
    Gong, Yanmin
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (08) : 8242 - 8255
  • [5] ClassTer: Mobile Shift-Robust Personalized Federated Learning via Class-Wise Clustering
    Li, Xiaochen
    Liu, Sicong
    Zhou, Zimu
    Xu, Yuan
    Guo, Bin
    Yu, Zhiwen
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (03) : 2014 - 2028
  • [6] Efficient Wireless Federated Learning With Partial Model Aggregation
    Chen, Zhixiong
    Yi, Wenqiang
    Shin, Hyundong
    Nallanathan, Arumugam
    Li, Geoffrey Ye
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2024, 72 (10) : 6271 - 6286
  • [7] Improved Modulation Recognition Using Personalized Federated Learning
    Rahman, Ratun
    Nguyen, Dinh C.
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (12) : 19937 - 19942
  • [8] Enhancing Decentralized and Personalized Federated Learning With Topology Construction
    Chen, Suo
    Xu, Yang
    Xu, Hongli
    Ma, Zhenguo
    Wang, Zhiyuan
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (10) : 9692 - 9707
  • [9] CPPer-FL: Clustered Parallel Training for Efficient Personalized Federated Learning
    Zhang, Ran
    Liu, Fangqi
    Liu, Jiang
    Chen, Mingzhe
    Tang, Qinqin
    Huang, Tao
    Yu, F. Richard
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (10) : 9424 - 9436
  • [10] Resource-Aware Personalized Federated Learning Based on Reinforcement Learning
    Wu, Tingting
    Li, Xiao
    Gao, Pengpei
    Yu, Wei
    Xin, Lun
    Guo, Manxue
    IEEE COMMUNICATIONS LETTERS, 2025, 29 (01) : 175 - 179