Model Decomposition and Reassembly for Purified Knowledge Transfer in Personalized Federated Learning

Cited by: 1
Authors
Zhang, Jie [1 ]
Guo, Song [1 ]
Ma, Xiaosong [2 ]
Xu, Wenchao [2 ]
Zhou, Qihua [3 ]
Guo, Jingcai [2 ]
Hong, Zicong [2 ]
Shan, Jun [4 ]
Affiliations
[1] Hong Kong Univ Sci & Technol HKUST, Hong Kong 999077, Peoples R China
[2] Hong Kong Polytech Univ, Hong Kong 999077, Peoples R China
[3] Shenzhen Univ, Shenzhen 518060, Peoples R China
[4] Hong Kong Chu Hai Coll, Hong Kong 999077, Peoples R China
Keywords
Training; Feature extraction; Knowledge transfer; Data models; Federated learning; Collaboration; Mobile computing; Model decomposition; Multi-task learning; Personalized federated learning
DOI
10.1109/TMC.2024.3466227
CLC number
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Personalized federated learning (pFL) collaboratively trains non-identical machine learning models for different clients so that each model adapts to its client's heterogeneously distributed dataset. State-of-the-art pFL approaches focus on exploiting inter-client similarities to facilitate collaborative learning, yet they can hardly escape the pooling of irrelevant knowledge that inevitably occurs during the aggregation phase, which hinders optimization convergence and degrades personalization performance. To resolve this conflict between facilitating collaboration and promoting personalization, we propose a novel pFL framework, dubbed pFedC, which first decomposes the globally aggregated knowledge into several compositional branches and then selectively reassembles the relevant branches to support conflict-aware collaboration among contradictory clients. Specifically, by reconstructing each local model into a shared feature extractor and multiple decomposed task-specific classifiers, training on each client becomes a mutually reinforced yet relatively independent multi-task learning process, which offers a new perspective on pFL. In addition, we introduce a purified knowledge aggregation mechanism that quantifies per-client combination weights to capture the clients' common prior while mitigating potential conflicts arising from the divergent knowledge induced by heterogeneous data. Extensive experiments over various models and datasets demonstrate the effectiveness and superior performance of the proposed algorithm.
Pages: 379-393
Page count: 15