Multi-Level Personalized Federated Learning on Heterogeneous and Long-Tailed Data

Cited: 3
Authors
Zhang, Rongyu [1,2]
Chen, Yun [1,2]
Wu, Chenrui [1,2]
Wang, Fangxin [3,4]
Li, Bo [5]
Affiliations
[1] Chinese Univ Hong Kong, Future Network Intelligence Inst FNii, Shenzhen 518172, Guangdong, Peoples R China
[2] Chinese Univ Hong Kong, Sch Sci & Engn SSE, Shenzhen 518172, Guangdong, Peoples R China
[3] Chinese Univ Hong Kong, Future Network Intelligence Inst FNii, Sch Sci & Engn SSE, Shenzhen 518172, Guangdong, Peoples R China
[4] Chinese Univ Hong Kong, Guangdong Prov Key Lab Future Networks Intelligence, Shenzhen 518172, Guangdong, Peoples R China
[5] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Keywords
Training; Computational modeling; Data models; Federated learning; Adaptation models; Mobile computing; Tail; Autonomous driving; clustering; dropout; federated learning; long-tailed learning; personalization; INTERNET
DOI
10.1109/TMC.2024.3409159
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) offers a privacy-centric distributed learning framework, enabling model training on individual clients and central aggregation without requiring data exchange. Nonetheless, FL deployments often suffer from non-i.i.d. and long-tailed class distributions across mobile applications, e.g., autonomous vehicles, which causes models to overfit as local training may converge to sub-optimal solutions. In our study, we explore the impact of data heterogeneity on model bias and introduce an innovative personalized FL framework, Multi-level Personalized Federated Learning (MuPFL), which leverages the hierarchical architecture of FL to fully harness computational resources at various levels. The framework integrates three pivotal modules: Biased Activation Value Dropout (BAVD) to mitigate overfitting and accelerate training; Adaptive Cluster-based Model Update (ACMU) to refine local models and ensure coherent global aggregation; and Prior Knowledge-assisted Classifier Fine-tuning (PKCF) to strengthen classification and personalize models to skewed local data using shared knowledge. Extensive experiments on diverse real-world datasets for image classification and semantic segmentation validate that MuPFL consistently outperforms state-of-the-art baselines even under extreme non-i.i.d. and long-tail conditions, improving accuracy by up to 7.39% and accelerating training by up to 80%, marking significant advances in both efficiency and effectiveness.
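For intuition, the sketch below illustrates one plausible reading of the BAVD idea: a dropout-style layer that preferentially zeroes the activations deviating most from the batch mean, applied during local client training. This is a minimal, assumed sketch, not the paper's exact algorithm; the class name BiasedActivationDropout, the drop_rate parameter, and the deviation-based scoring rule are hypothetical choices for illustration.

# Illustrative sketch only (assumed, not the paper's exact BAVD algorithm):
# a dropout-style layer that zeroes the activations deviating most from the
# batch mean. Class name and parameters are hypothetical.
import torch
import torch.nn as nn

class BiasedActivationDropout(nn.Module):
    def __init__(self, drop_rate: float = 0.2):
        super().__init__()
        self.drop_rate = drop_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_rate <= 0.0:
            return x
        # Score each activation by its absolute deviation from the batch mean;
        # large deviations are treated as "biased" activations.
        deviation = (x - x.mean(dim=0, keepdim=True)).abs()
        k = max(1, int(self.drop_rate * x[0].numel()))
        flat = deviation.flatten(start_dim=1)
        # Per-sample threshold at the k-th largest deviation.
        thresh = flat.topk(k, dim=1).values[:, -1].view(-1, *([1] * (x.dim() - 1)))
        mask = (deviation < thresh).to(x.dtype)
        # Inverted-dropout rescaling of the surviving activations.
        return x * mask / (1.0 - self.drop_rate)

# Example usage: insert after an activation during local client training.
layer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), BiasedActivationDropout(0.2))
out = layer(torch.randn(32, 128))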
Pages: 12396-12409
Number of pages: 14