FGSS: Federated global self-supervised framework for large-scale unlabeled data

Cited by: 0
Authors
Zhang, Chen [1 ]
Xie, Zixuan [1 ]
Yu, Bin [1 ]
Wen, Chao [2 ]
Xie, Yu [2 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Shaanxi, Peoples R China
[2] Shanxi Univ, Key Lab Computat Intelligence & Chinese Informat P, Minist Educ, Taiyuan 030006, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; Self-supervised learning; Non-IID data; Aggregation method
DOI
10.1016/j.asoc.2023.110453
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Due to the unique advantages of collaborative learning on isolated yet unlabeled data, federated self-supervised learning has received increasing attention from both academic and industrial researchers. Most existing federated self-supervised approaches concentrate on the classical scenario, in which a large amount of unlabeled data is stored on the clients. However, in many real-world applications, partial labels may be available to the client user while a large amount of unlabeled data remains on the server side, a scenario that existing federated self-supervised methods usually struggle to address. In this paper, we propose a federated global self-supervised framework (FGSS) for large-scale unlabeled data that innovatively performs self-supervised learning on the server side; during every round of communication, a small amount of labeled data from the clients is used to improve the performance of self-supervised learning. To address the heterogeneity of local data across clients, we design an aggregation approach that adjusts the weight of each local model based on its frequency of participation in the communication and the size of its dataset. Experimental results show that our framework outperforms most existing state-of-the-art methods in both IID and non-IID settings under certain conditions. (c) 2023 Elsevier B.V. All rights reserved.
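The abstract describes an aggregation rule weighted by each client's participation frequency and dataset size, but does not give the exact formula. A minimal FedAvg-style sketch under that description might look as follows; the linear blend and its mixing parameter `alpha` are hypothetical choices for illustration, not the paper's stated method:

```python
import numpy as np

def fgss_aggregate(local_states, dataset_sizes, participation_counts, alpha=0.5):
    """Weighted averaging of client models (illustrative sketch).

    local_states: list of dicts mapping parameter name -> np.ndarray
    dataset_sizes: number of local samples per client
    participation_counts: rounds each client has participated in
    alpha: hypothetical blend between size share and participation share
    """
    sizes = np.asarray(dataset_sizes, dtype=float)
    counts = np.asarray(participation_counts, dtype=float)
    # Blend the two normalized shares into one weight per client.
    weights = alpha * sizes / sizes.sum() + (1 - alpha) * counts / counts.sum()
    # FedAvg-style weighted sum of each parameter tensor.
    return {k: sum(w * s[k] for w, s in zip(weights, local_states))
            for k in local_states[0]}
```

For example, a client with three times the data and three times the participation of its peer receives weight 0.75 regardless of `alpha`, so its parameters dominate the aggregate.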
Pages: 9