Self-Distilled Self-supervised Representation Learning

Cited by: 9
Authors
Jang, Jiho [1 ]
Kim, Seonhoon [2 ]
Yoo, Kiyoon [1 ]
Kong, Chaerin [1 ]
Kim, Jangho [3 ]
Kwak, Nojun [1 ]
Affiliations
[1] Seoul Natl Univ, Seoul, South Korea
[2] Coupang, Seoul, South Korea
[3] Kookmin Univ, Seoul, South Korea
Source
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) | 2023
DOI
10.1109/WACV56688.2023.00285
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
State-of-the-art frameworks in self-supervised learning have recently shown that fully utilizing transformer-based models can lead to a performance boost compared to conventional CNN models. Striving to maximize the mutual information of two views of an image, existing works apply a contrastive loss to the final representations. Motivated by self-distillation in the supervised regime, we further exploit this by allowing the intermediate representations to learn from the final layer via the contrastive loss. Through self-distillation, the intermediate layers are better suited for instance discrimination, so the performance of an early-exited sub-network degrades little from that of the full network. This also renders the pretext task easier for the final layer, leading to better representations. Our method, Self-Distilled Self-Supervised Learning (SDSSL), outperforms competitive baselines (SimCLR, BYOL and MoCo v3) using ViT on various tasks and datasets. Under both the linear evaluation and k-NN protocols, SDSSL leads to superior performance not only in the final layers but also in most of the lower layers. Furthermore, qualitative and quantitative analyses show how representations are formed more effectively along the transformer layers. Code is available at https://github.com/hagiss/SDSSL.
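The core idea from the abstract — a standard contrastive term between the two views' final representations, plus self-distillation terms that pull each intermediate layer toward the final layer of the other view via the same contrastive loss — can be sketched as below. This is a minimal illustration with NumPy, not the paper's implementation: the function names, the InfoNCE form, and the uniform averaging over layers are assumptions made for clarity.

```python
import numpy as np

def info_nce(queries, keys, temperature=0.1):
    """InfoNCE contrastive loss: the positive for row i of `queries`
    is row i of `keys`; all other rows act as negatives."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                    # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on the diagonal

def sdssl_style_loss(inter_reps_v1, final_rep_v1, final_rep_v2):
    """Sketch of an SDSSL-style objective: the usual final-layer
    contrastive term, plus one self-distillation term per intermediate
    layer of view 1, each contrasted against view 2's final-layer
    representation (illustrative weighting, not the paper's)."""
    loss = info_nce(final_rep_v1, final_rep_v2)       # standard SSL term
    for inter in inter_reps_v1:                        # self-distillation terms
        loss += info_nce(inter, final_rep_v2)
    return loss / (1 + len(inter_reps_v1))
```

In a real ViT training loop the intermediate representations would be the (projected) class tokens of selected transformer blocks, and gradients through the final-layer target would typically be stopped or routed through a momentum encoder, depending on the base framework (MoCo v3, BYOL, etc.).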
Pages: 2828-2838
Page count: 11