Emerging Properties in Self-Supervised Vision Transformers

Cited: 3018
Authors
Caron, Mathilde [1 ,2 ]
Touvron, Hugo [1 ,3 ]
Misra, Ishan [1 ]
Jegou, Herve [1 ]
Mairal, Julien [2 ]
Bojanowski, Piotr [1 ]
Joulin, Armand [1 ]
Affiliations
[1] Facebook AI Res, Menlo Pk, CA 94025 USA
[2] Univ Grenoble Alpes, INRIA, CNRS, Grenoble INP,LJK, F-38000 Grenoble, France
[3] Sorbonne Univ, Paris, France
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
DOI
10.1109/ICCV48922.2021.00951
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) [16] that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder [26], multi-crop training [9], and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
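The abstract describes DINO as self-distillation with no labels: a student network is trained to match sharpened, centered outputs of a momentum (EMA) teacher. A minimal NumPy sketch of these two ingredients follows; the function names, temperatures, and momentum value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, temp):
    # Temperature-scaled softmax over the last axis.
    z = x / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_loss(student_logits, teacher_logits, center,
              student_temp=0.1, teacher_temp=0.04):
    # Teacher targets are centered (to discourage collapse) and
    # sharpened with a low temperature; no gradient flows through them.
    t = softmax(teacher_logits - center, teacher_temp)
    # Student prediction at a higher temperature.
    log_s = np.log(softmax(student_logits, student_temp) + 1e-12)
    # Cross-entropy H(teacher, student), averaged over the batch.
    return float(-(t * log_s).sum(axis=-1).mean())

def ema_update(teacher_w, student_w, momentum=0.996):
    # Momentum encoder: the teacher's weights track an exponential
    # moving average of the student's weights.
    return momentum * teacher_w + (1.0 - momentum) * student_w
```

In the full method, the loss is summed over multiple crops of the same image (global crops feed the teacher, all crops feed the student), and the center itself is an EMA of teacher outputs; the sketch omits those details.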
Pages: 9630-9640
Page count: 11
References
69 in total
[1]  
Anil Rohan, 2018, INT C LEARN REPR
[2]  
[Anonymous], 2018, CVPR, DOI DOI 10.1109/CVPR.2018.00975
[3]  
[Anonymous], 2017, ICML
[4]  
[Anonymous], 2020, NeurIPS
[5]  
[Anonymous], 2009, CIVR
[6]  
[Anonymous], 2008, CVPR
[7]  
[Anonymous], 2017, ARXIV170102810
[8]  
[Anonymous], ICML
[9]  
Asano Y. M., 2020, INT C LEARN REPR
[10]  
Avrithis Yannis, 2018, REVISITING OXFORD PA, P6