Exploring the Equivalence of Siamese Self-Supervised Learning via A Unified Gradient Framework

Cited by: 26
Authors
Tao, Chenxin [1 ,2 ]
Wang, Honghui [1 ,2 ]
Zhu, Xizhou [2 ]
Dong, Jiahua [2 ,3 ]
Song, Shiji [1 ]
Huang, Gao [1 ,4 ]
Dai, Jifeng [1 ,4 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] SenseTime Res, Hong Kong, Peoples R China
[3] Zhejiang Univ, Hangzhou, Peoples R China
[4] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
Funding
National Key R&D Program of China;
DOI
10.1109/CVPR52688.2022.01403
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Self-supervised learning has shown great potential for extracting powerful visual representations without human annotations. Various works have been proposed to tackle self-supervised learning from different perspectives: (1) contrastive learning methods (e.g., MoCo, SimCLR) utilize both positive and negative samples to guide the training direction; (2) asymmetric network methods (e.g., BYOL, SimSiam) get rid of negative samples by introducing a predictor network and the stop-gradient operation; (3) feature decorrelation methods (e.g., Barlow Twins, VICReg) instead aim to reduce the redundancy between feature dimensions. These methods appear quite different in their loss functions, which are designed from various motivations. Their final accuracy numbers also vary, since different networks and training tricks are used in different works. In this work, we demonstrate that these methods can be unified into the same form. Instead of comparing their loss functions, we derive a unified formula through gradient analysis. Furthermore, we conduct fair and detailed experiments to compare their performance. It turns out that there is little gap between these methods, and the use of a momentum encoder is the key factor that boosts performance. From this unified framework, we propose UniGrad, a simple but effective gradient form for self-supervised learning. It does not require a memory bank or a predictor network, yet can still achieve state-of-the-art performance and easily adopt other training strategies. Extensive experiments on linear evaluation and many downstream tasks also demonstrate its effectiveness. Code shall be released.
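The gradient analysis the abstract refers to rests on decomposing each method's training signal into an attraction term toward the positive sample and a repulsion term from other samples. As a minimal illustration of that decomposition — not the paper's exact UniGrad formula — the sketch below derives the analytic gradient of a generic InfoNCE-style contrastive loss; the function names `infonce_loss` and `contrastive_grad` are hypothetical.

```python
import numpy as np

def infonce_loss(z1, z2, negs, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor z1, one positive z2,
    and a stack of negatives (illustrative; embeddings left unnormalized)."""
    sims = np.concatenate([[z1 @ z2], negs @ z1]) / tau
    m = sims.max()  # subtract the max for numerical stability
    return -sims[0] + m + np.log(np.exp(sims - m).sum())

def contrastive_grad(z1, z2, negs, tau=0.1):
    """Analytic gradient of infonce_loss w.r.t. z1, written as the sum of a
    positive (attraction) term and a softmax-weighted repulsion term — the
    positive/negative gradient structure such an analysis builds on."""
    sims = np.concatenate([[z1 @ z2], negs @ z1]) / tau
    p = np.exp(sims - sims.max())
    p /= p.sum()                      # softmax weights over all samples
    all_z = np.vstack([z2, negs])
    pos_grad = -z2 / tau              # pull the anchor toward its positive
    neg_grad = (p @ all_z) / tau      # weighted push from all samples
    return pos_grad + neg_grad
```

A finite-difference check on `infonce_loss` confirms that the two terms together reproduce the exact gradient of the loss.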
Pages: 14411-14420 (10 pages)
References (36 total)
[1]  
Bardes A, 2022, arXiv, DOI 10.48550/arXiv.2105.04906
[2]  
Bromley J., 1993, International Journal of Pattern Recognition and Artificial Intelligence, V7, P669, DOI 10.1142/S0218001493000339
[3]  
Cao Yue, 2020, NeurIPS
[4]  
Caron M, 2020, ADV NEUR IN, V33
[5]   Emerging Properties in Self-Supervised Vision Transformers [J].
Caron, Mathilde ;
Touvron, Hugo ;
Misra, Ishan ;
Jegou, Herve ;
Mairal, Julien ;
Bojanowski, Piotr ;
Joulin, Armand .
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, :9630-9640
[6]  
Chen T, 2020, PR MACH LEARN RES, V119
[7]  
Chen XL, 2021, arXiv:2104.02057
[8]  
Chen XL, 2020, arXiv:2003.04297
[9]   Exploring Simple Siamese Representation Learning [J].
Chen, Xinlei ;
He, Kaiming .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :15745-15753
[10]   Learning a similarity metric discriminatively, with application to face verification [J].
Chopra, S ;
Hadsell, R ;
LeCun, Y .
2005 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL 1, PROCEEDINGS, 2005, :539-546