Scaling Vision Transformers

Cited by: 319
Authors
Zhai, Xiaohua [1 ]
Kolesnikov, Alexander [1 ]
Houlsby, Neil [1 ]
Beyer, Lucas [1 ]
Affiliations
[1] Google Research, Brain Team, Zurich, Switzerland
Source
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2022
DOI
10.1109/CVPR52688.2022.01179
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results; therefore, understanding a model's scaling properties is key to designing future generations effectively. While the laws for scaling Transformer language models have been studied, it is unknown how Vision Transformers scale. To address this, we scale ViT models and data, both up and down, and characterize the relationships between error rate, data, and compute. Along the way, we refine the architecture and training of ViT, reducing memory consumption and increasing the accuracy of the resulting models. As a result, we successfully train a ViT model with two billion parameters, which attains a new state of the art on ImageNet of 90.45% top-1 accuracy. The model also performs well for few-shot transfer, for example, reaching 84.86% top-1 accuracy on ImageNet with only 10 examples per class.
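The abstract's central technical claim is a characterization of how error rate relates to data and compute. As a minimal sketch of how such a relationship is commonly characterized, the code below fits a saturating power law error = a * (compute + d)^(-b) + c to observed (compute, error) pairs; the functional form, variable names, and data points are illustrative assumptions, not the authors' code or measurements.

# Illustrative sketch only: fit a saturating power law
#     error = a * (compute + d)**(-b) + c
# to hypothetical (compute, error) observations. The data and the exact
# functional form are assumptions for demonstration, not the paper's fit.
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(compute, a, b, c, d):
    # c acts as an irreducible error floor approached at very large compute.
    return a * (compute + d) ** (-b) + c

# Made-up (compute, top-1 error) pairs; compute units are arbitrary.
compute = np.array([1e2, 3e2, 1e3, 3e3, 1e4, 3e4])
error = np.array([0.32, 0.27, 0.23, 0.20, 0.18, 0.17])

params, _ = curve_fit(saturating_power_law, compute, error,
                      p0=[1.0, 0.3, 0.1, 1.0], maxfev=10000)
a, b, c, d = params
print(f"fitted exponent b = {b:.3f}, estimated error floor c = {c:.3f}")

The fitted exponent b summarizes how quickly error decreases with compute, and the floor c captures the saturation behavior that makes a plain power law a poor fit at large scale.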
Pages: 12094-12103
Page count: 10