Mitigating Dimensional Collapse and Model Drift in Non-IID Data of Federated Learning

Cited: 0
Authors
Jiang, Ming [1 ,2 ,3 ]
Li, Yun [1 ]
Lu, Yao [1 ,2 ,3 ]
Guo, Biao [1 ]
Zhang, Feng [1 ,2 ,3 ]
Affiliations
[1] Guilin Univ Elect Technol, Sch Comp Sci & Informat Secur, Guilin 541004, Guangxi, Peoples R China
[2] Metaverse Applicat Engn Ctr, Nanning 530000, Guangxi, Peoples R China
[3] Guangxi Inst Digital Technol, Nanning 530000, Guangxi, Peoples R China
Source
NEURAL COMPUTING FOR ADVANCED APPLICATIONS, NCAA 2024, PT III | 2025 / Vol. 2183
Keywords
Federated Learning; Non-IID; Dimension Collapse; Contrastive Learning;
DOI
10.1007/978-981-97-7007-6_19
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
One of the key challenges in federated learning is handling non-independent and identically distributed (Non-IID) data across parties, which can cause local model parameters to diverge, reduce the convergence accuracy of the global model, and induce severe dimensional collapse. In this paper, we introduce VICON (Variance-Invariance-Covariance model Contrastive Learning), a method that prevents dimensional collapse. Specifically, during local training a regularization technique encourages orthogonal feature representations across dimensions and keeps the variance of each embedding dimension above a predefined threshold. Complementing this, contrastive learning is used to cluster similar instances while pushing apart dissimilar ones, further enhancing discriminative capability. Together these components control the model's parameter norm and adapt it to high-dimensional data, reducing information loss and aligning local models with the global optimization objective of federated learning so as to minimize bias and collapse. Extensive experiments show that VICON outperforms algorithms such as federated averaging (FedAvg), federated proximal optimization (FedProx), and model-contrastive federated learning (MOON). Compared with MOON, it not only improves accuracy by 2.2% to 3.7%, but also communicates efficiently and remains robust under imbalanced data and uncertain local updates.
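The abstract does not spell out VICON's loss terms, but its description (keep each embedding dimension's variance above a threshold, decorrelate dimensions) matches VICReg-style variance and covariance regularizers (Bardes et al., ref [1]). The sketch below is a minimal illustration of those two terms, not the paper's actual implementation; the function names, the threshold `gamma`, and the toy embeddings are all assumptions introduced here for illustration.

```python
import math
import random

def mean(xs):
    return sum(xs) / len(xs)

def variance_loss(z, gamma=1.0, eps=1e-4):
    # z: list of n embeddings, each a list of d floats.
    # Hinge on the per-dimension standard deviation: any dimension whose
    # std drops below the threshold gamma is penalized, which keeps
    # dimensions "alive" and counteracts dimensional collapse.
    d = len(z[0])
    loss = 0.0
    for j in range(d):
        col = [row[j] for row in z]
        mu = mean(col)
        var = mean([(x - mu) ** 2 for x in col])
        loss += max(0.0, gamma - math.sqrt(var + eps))
    return loss / d

def covariance_loss(z):
    # Penalize squared off-diagonal entries of the embedding covariance
    # matrix so that dimensions decorrelate (approximately orthogonal
    # feature representations).
    n, d = len(z), len(z[0])
    mus = [mean([row[j] for row in z]) for j in range(d)]
    centered = [[row[j] - mus[j] for j in range(d)] for row in z]
    loss = 0.0
    for j in range(d):
        for k in range(d):
            if j == k:
                continue
            cov = sum(r[j] * r[k] for r in centered) / (n - 1)
            loss += cov ** 2
    return loss / d

rng = random.Random(0)
# Healthy embeddings: independent, roughly unit-variance dimensions.
z_healthy = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(256)]
# Collapsed embeddings: every dimension is a copy of one signal.
z_collapsed = [[v] * 8 for v in (rng.gauss(0, 1) for _ in range(256))]

print("healthy   cov loss:", covariance_loss(z_healthy))    # small
print("collapsed cov loss:", covariance_loss(z_collapsed))  # large
```

On the collapsed embeddings the covariance penalty is large even though each dimension individually has healthy variance, which is why the two terms are complementary: the variance hinge alone cannot detect dimensions that are merely copies of each other.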
Pages: 270-283
Page count: 14
References
28 in total
[1] Bardes A, 2022, arXiv, DOI 10.48550/ARXIV.2105.04906
[2] Chen T, 2020, PR MACH LEARN RES, V119
[3] Chen, Yongchao; Guan, Zhizi; Liu, Jingnan; Yang, Wei; Wang, Hailong. Anomalous layer-dependent lubrication on graphene-covered substrate: Competition between adhesion and plasticity [J]. APPLIED SURFACE SCIENCE, 2022, 598
[4] Ermolov A, 2021, PR MACH LEARN RES, V139
[5] He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian. Deep Residual Learning for Image Recognition [J]. 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016: 770-778
[6] Hsu, Tzu-Ming Harry; Qi, Hang; Brown, Matthew. Federated Visual Classification with Real-World Data Distribution [J]. COMPUTER VISION - ECCV 2020, PT X, 2020, 12355: 76-92
[7] Huang, Wenke; Ye, Mang; Du, Bo. Learn from Others and Be Yourself in Heterogeneous Federated Learning [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022: 10133-10143
[8] Jing L., 2021, arXiv
[9] Karimireddy Sai Praneeth, 2020, ICML
[10] Li, Qinbin; He, Bingsheng; Song, Dawn. Model-Contrastive Federated Learning [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 10708-10717