Personalized Federated Learning on long-tailed data via knowledge distillation and generated features

Cited by: 0
Authors
Lv, Fengling [1 ]
Qian, Pinxin [1 ,2 ]
Lu, Yang [1 ,2 ]
Wang, Hanzi [1 ,2 ]
Affiliations
[1] Xiamen Univ, Key Lab Multimedia Trusted Percept & Efficient Com, Minist Educ China, Xiamen, Peoples R China
[2] Xiamen Univ, Sch Informat, Fujian Key Lab Sensing & Comp Smart City, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Personalized Federated Learning; Knowledge distillation; Long-tailed learning;
DOI
10.1016/j.patrec.2024.09.024
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Personalized Federated Learning (PFL) offers a novel paradigm for distributed learning, which aims to learn a personalized model for each client through collaborative training of all distributed clients in a privacy-preserving manner. However, the performance of personalized models is often compromised by data heterogeneity and the challenges of long-tailed distributions, both of which are common in real-world applications. In this paper, we explore the joint problem of data heterogeneity and long-tailed distribution in PFL and propose a corresponding solution called Personalized Federated Learning with Distillation and generated Features (PFLDF). Specifically, we employ a lightweight generator trained on the server to generate a balanced feature set for each client, supplementing local minority-class information with global class information. This augmentation mechanism is a robust countermeasure against the adverse effects of data imbalance. Subsequently, we use knowledge distillation to transfer the knowledge of the global model to the personalized models to improve their generalization performance. Extensive experimental results demonstrate the superiority of PFLDF over other state-of-the-art PFL methods under long-tailed data distributions.
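The distillation step described in the abstract (transferring global-model knowledge to each personalized model) is commonly realized as a temperature-softened KL-divergence loss between the two models' outputs. A minimal sketch in plain Python, assuming the standard Hinton-style formulation (the logits and temperature below are illustrative, not values from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in the classic knowledge-distillation loss.
    # Here the "teacher" plays the role of the global model and the
    # "student" the personalized (client-side) model.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl

# Identical outputs give zero loss; diverging outputs give a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))       # 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)   # True
```

In a PFL setting this term is typically added to the client's local cross-entropy objective, so the personalized model fits its local data while staying close to the global model's predictions.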
Pages: 178 / 183
Number of pages: 6