Distill on the Go: Online knowledge distillation in self-supervised learning

Cited by: 27
Authors
Bhat, Prashant [1 ]
Arani, Elahe [1 ]
Zonooz, Bahram [1 ]
Affiliations
[1] NavInfo Europe, Adv Res Lab, Eindhoven, Netherlands
Source
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021)
DOI
10.1109/CVPRW53098.2021.00301
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning solves pretext prediction tasks that require no annotations to learn feature representations. For vision, pretext tasks such as predicting rotation or solving jigsaw puzzles are created solely from the input data; although the predicted information is already known, solving these tasks yields representations useful for downstream tasks. However, recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models do. To address self-supervised pre-training of smaller models, we propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm that uses single-stage online knowledge distillation to improve the representation quality of smaller models. We employ a deep mutual learning strategy in which two models collaboratively learn from each other. Specifically, each model is trained with a self-supervised objective plus a distillation term that aligns its softmax probabilities over similarity scores with those of the peer model. We conduct extensive experiments across multiple benchmark datasets, learning objectives, and architectures to demonstrate the potential of the proposed method. Our results show significant performance gains in the presence of noisy and limited labels, and in generalization to out-of-distribution data.
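The distillation term described above can be sketched in a few lines: each peer model turns its batch of embeddings into a per-sample softmax distribution over pairwise similarity scores, and a KL-divergence term pulls each model's distribution toward its peer's. This is a minimal NumPy illustration, not the authors' implementation; the temperature value, cosine similarity, and the symmetric KL pairing are assumptions for the sketch.

```python
import numpy as np

def softmax(x, temperature=0.1):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = x / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def similarity_distribution(embeddings, temperature=0.1):
    # Cosine similarity of each sample to every other sample in the batch,
    # converted to a probability distribution per sample.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    # Mask self-similarity so each row distributes over the other samples only.
    np.fill_diagonal(sims, -np.inf)
    return softmax(sims, temperature)

def distillation_loss(p_student, p_teacher, eps=1e-12):
    # KL(teacher || student) averaged over the batch: the student's
    # similarity distribution is pulled toward the peer model's.
    return np.mean(np.sum(
        p_teacher * (np.log(p_teacher + eps) - np.log(p_student + eps)),
        axis=1))

# Toy example: two peer models embed the same batch of 4 samples.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(4, 8))  # hypothetical embeddings from model A
emb_b = rng.normal(size=(4, 8))  # hypothetical embeddings from model B
p_a = similarity_distribution(emb_a)
p_b = similarity_distribution(emb_b)
loss_a = distillation_loss(p_a, p_b)  # added to model A's SSL objective
loss_b = distillation_loss(p_b, p_a)  # added to model B's SSL objective
```

In the online setting both losses are applied simultaneously, so the two models act as peers rather than a fixed teacher and a student, which is what makes the distillation single-stage.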
Pages: 2672-2681
Page count: 10