FedUTN: federated self-supervised learning with updating target network

Cited by: 9
Authors
Li, Simou [1 ]
Mao, Yuxing [1 ]
Li, Jian [1 ]
Xu, Yihang [1 ]
Li, Jinsen [1 ]
Chen, Xueshuo [1 ]
Liu, Siyang [1 ,2 ]
Zhao, Xianping [2 ]
Affiliations
[1] Chongqing Univ, State Key Lab Power Transmiss Equipment & Syst Se, Chongqing 400044, Peoples R China
[2] Yunnan Power Grid Co Ltd, Elect Power Res Inst, Kunming 650217, Yunnan, Peoples R China
Keywords
Computer vision; Self-supervised learning; Federated learning; Federated self-supervised learning;
DOI
10.1007/s10489-022-04070-6
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Self-supervised learning (SSL) can learn noteworthy representations from unlabeled data, which mitigates the problem of insufficient labeled data to a certain extent. Early SSL methods assumed centralized data, but growing awareness of privacy protection restricts the sharing of the decentralized, unlabeled data generated by a variety of mobile devices, such as cameras, phones, and other terminals. Federated self-supervised learning (FedSSL) has emerged from recent efforts to combine federated learning, which has mostly been applied to supervised learning, with SSL. Building on this work, we propose a new FedSSL framework, FedUTN, which aims to let each client train a model that performs well on both independent and identically distributed (IID) and non-IID data. Each party possesses two asymmetric networks: a target network and an online network. FedUTN first aggregates the online network parameters of each terminal and then updates each terminal's target network with the aggregated parameters, a radical departure from the update techniques used in earlier studies. In conjunction with this method, we offer a novel control algorithm that replaces exponential moving average (EMA) updates during training. Extensive experiments demonstrate that: (1) updating the target network with the aggregated online network is feasible; (2) FedUTN's aggregation strategy is simpler, more effective, and more robust; and (3) FedUTN outperforms all other prevalent FedSSL algorithms, exceeding the state-of-the-art algorithm by 0.5%-1.6% under standard experimental conditions.
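The core mechanism the abstract describes (server-side averaging of the clients' online networks, then overwriting each client's target network with the aggregate instead of applying a BYOL-style EMA update) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: parameters are represented as dictionaries of NumPy arrays, the helper names `fedavg` and `fedutn_round` are hypothetical, and local SSL training between rounds is omitted.

```python
import numpy as np

def fedavg(param_sets, weights):
    """Weighted average of the clients' online-network parameters (FedAvg-style)."""
    total = sum(weights)
    return {
        name: sum(w * p[name] for w, p in zip(weights, param_sets)) / total
        for name in param_sets[0]
    }

def fedutn_round(clients, weights):
    """One communication round as described in the abstract: the server
    aggregates the online networks, then each client's online network is
    replaced by the aggregate AND its target network is overwritten with
    the same aggregate (in place of a local EMA update)."""
    aggregated = fedavg([c["online"] for c in clients], weights)
    for c in clients:
        c["online"] = {k: v.copy() for k, v in aggregated.items()}  # broadcast
        c["target"] = {k: v.copy() for k, v in aggregated.items()}  # target update
    return aggregated
```

By contrast, BYOL-style training would update the target as `target = tau * target + (1 - tau) * online` on each client; the sketch above shows how FedUTN's aggregated-parameter overwrite removes that EMA coefficient from the loop.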
Pages: 10879-10892
Page count: 14