Federated Self-Supervised Learning Based on Prototypes Clustering Contrastive Learning for Internet of Vehicles Applications

Times Cited: 1
Authors
Dai, Cheng [1 ]
Wei, Shuai [1 ]
Dai, Shengxin [1 ]
Garg, Sahil [2 ,3 ]
Kaddoum, Georges [2 ,4 ]
Hossain, M. Shamim [5 ]
Affiliations
[1] Sichuan Univ, Sch Comp Sci, Chengdu 610042, Peoples R China
[2] Ecole Technol Super, Elect Engn Dept, Montreal, PQ H3C 1K3, Canada
[3] Chitkara Univ, Inst Engn & Technol, Ctr Res Impact & Outcome, Rajpura 140401, India
[4] Lebanese Amer Univ, Artificial Intelligence & Cyber Syst Res Ctr, Beirut 03797751, Lebanon
[5] King Saud Univ, Coll Comp & Informat Sci, Dept Software Engn, Riyadh 12372, Saudi Arabia
Source
IEEE INTERNET OF THINGS JOURNAL | 2025, Vol. 12, Issue 05
Funding
National Natural Science Foundation of China;
Keywords
Prototypes; Servers; Data models; Training; Computational modeling; Biological system modeling; Contrastive learning; federated learning (FL); Internet of Things (IoT); self-supervised learning;
DOI
10.1109/JIOT.2024.3453336
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) is a novel paradigm for distributed edge intelligence in Internet-of-Vehicles (IoV) applications, enabling superior model training performance without the need to share local data. However, in practical FL architectures, the presence of nonindependent and identically distributed (non-IID) data at edge devices, together with randomly participating distributed nodes, can result in model bias and a subsequent decrease in overall performance. To solve this problem, a new federated self-supervised learning method based on prototypes clustering contrastive learning (FedPCC) is proposed, which effectively addresses the issues of asynchronous edge training and global model bias by introducing an unsupervised prototype layer. The prototype layer maps edge features into a global space and performs clustering, enabling a new aggregation method for global prototypes on the server. The remaining model components are then aggregated according to each client's data weight. In addition, during the parameter deployment phase, we replace the prototype layer to acquire global knowledge, while employing momentum updates to preserve the local knowledge of the other components. Finally, to assess the efficacy of the proposed approach, we carried out comprehensive experiments across various datasets. The findings show that our method achieves state-of-the-art performance, which validates its effectiveness.
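For a concrete picture of the workflow described in the abstract, below is a minimal sketch of one prototype-clustering aggregation round in plain NumPy. The function names (local_prototypes, aggregate_prototypes, aggregate_weights, deploy_to_client), the use of a simple k-means loop for clustering, the cluster counts, and the momentum value are illustrative assumptions; the paper's actual FedPCC procedure may differ in these details.

    import numpy as np

    def local_prototypes(features, num_prototypes=10, iters=20, seed=0):
        # Cluster a client's feature vectors into local prototypes with a plain
        # k-means loop; assumes len(features) >= num_prototypes.
        rng = np.random.default_rng(seed)
        centers = features[rng.choice(len(features), size=num_prototypes, replace=False)].copy()
        for _ in range(iters):
            # Assign every feature to its nearest prototype, then recompute the means.
            dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
            labels = dists.argmin(axis=1)
            for k in range(num_prototypes):
                members = features[labels == k]
                if len(members) > 0:
                    centers[k] = members.mean(axis=0)
        return centers

    def aggregate_prototypes(client_prototypes, num_global=10, iters=20, seed=0):
        # Server side: cluster the union of all client prototypes in the shared
        # space to obtain the global prototypes.
        stacked = np.concatenate(client_prototypes, axis=0)
        return local_prototypes(stacked, num_prototypes=num_global, iters=iters, seed=seed)

    def aggregate_weights(client_weights, client_sizes):
        # Server side: data-weighted (FedAvg-style) averaging of the remaining
        # model components, keyed by parameter name.
        sizes = np.asarray(client_sizes, dtype=float)
        coeffs = sizes / sizes.sum()
        return {name: sum(c * w[name] for c, w in zip(coeffs, client_weights))
                for name in client_weights[0]}

    def deploy_to_client(local_weights, global_weights, global_protos, momentum=0.5):
        # Client side: the prototype layer is replaced outright with the global
        # prototypes, while the other components are momentum-updated so that
        # local knowledge is partially preserved.
        updated = {name: momentum * w + (1.0 - momentum) * global_weights[name]
                   for name, w in local_weights.items()}
        updated["prototype_layer"] = global_protos
        return updated

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Two toy clients with non-IID feature distributions (shifted Gaussians).
        feats = [rng.normal(loc=i, size=(200, 16)) for i in range(2)]
        protos = [local_prototypes(f, num_prototypes=5) for f in feats]
        global_protos = aggregate_prototypes(protos, num_global=5)
        weights = [{"encoder": rng.normal(size=(16, 16))} for _ in feats]
        global_w = aggregate_weights(weights, client_sizes=[200, 200])
        client_state = deploy_to_client(weights[0], global_w, global_protos)
        print(client_state["prototype_layer"].shape)  # (5, 16)

The sketch uses a parameter dictionary with a hypothetical "prototype_layer" key to mirror the abstract's distinction between the globally replaced prototype layer and the momentum-updated remaining components.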
Pages: 4692-4700
Page Count: 9
Related Papers
50 records in total
  • [1] Contrastive and Non-Contrastive Strategies for Federated Self-Supervised Representation Learning and Deep Clustering
    Miao, Runxuan
    Koyuncu, Erdem
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (06) : 1070 - 1084
  • [2] Memory Bank Clustering for Self-supervised Contrastive Learning
    Hao, Yiqing
    An, Gaoyun
    Ruan, Qiuqi
    IMAGE AND GRAPHICS TECHNOLOGIES AND APPLICATIONS, IGTA 2021, 2021, 1480 : 132 - 144
  • [3] Federated Graph Anomaly Detection via Contrastive Self-Supervised Learning
    Kong, Xiangjie
    Zhang, Wenyi
    Wang, Hui
    Hou, Mingliang
    Chen, Xin
    Yan, Xiaoran
    Das, Sajal K.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 14
  • [4] Self-supervised learning of monocular depth estimators in autonomous vehicles with federated learning
    Soares, Elton F. de S.
    Campos, Carlos Alberto V.
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 151
  • [5] Adversarial Self-Supervised Contrastive Learning
    Kim, Minseon
    Tack, Jihoon
    Hwang, Sung Ju
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [6] A Survey on Contrastive Self-Supervised Learning
    Jaiswal, Ashish
    Babu, Ashwin Ramesh
    Zadeh, Mohammad Zaki
    Banerjee, Debapriya
    Makedon, Fillia
    TECHNOLOGIES, 2021, 9 (01)
  • [7] Self-Supervised Learning: Generative or Contrastive
    Liu, Xiao
    Zhang, Fanjin
    Hou, Zhenyu
    Mian, Li
    Wang, Zhaoyu
    Zhang, Jing
    Tang, Jie
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 857 - 876
  • [8] Self-supervised Variational Contrastive Learning with Applications to Face Understanding
    Yavuz, Mehmet Can
    Yanikoglu, Berrin
    2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024, 2024
  • [9] A Simple and Effective Usage of Self-supervised Contrastive Learning for Text Clustering
    Shi, Haoxiang
    Wang, Cen
    Sakai, Tetsuya
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 315 - 320
  • [10] Self-supervised contrastive representation learning for classifying Internet of Things malware
    Wang, Fangwei
    Chen, Yinhe
    Gao, Hongfeng
    Li, Qingru
    Wang, Changguang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 150