ConViT: improving vision transformers with soft convolutional inductive biases

Cited by: 485
Authors
d'Ascoli, Stephane [1 ,2 ]
Touvron, Hugo [2 ]
Leavitt, Matthew L. [2 ]
Morcos, Ari S. [2 ]
Biroli, Giulio [1 ,2 ]
Sagun, Levent [2 ]
Affiliations
[1] Ecole Normale Superieure, Department of Physics, Paris, France
[2] Facebook AI Research, Paris, France
Keywords
deep learning; machine learning; statistics
DOI
10.1088/1742-5468/ac9830
Chinese Library Classification (CLC)
O3 [Mechanics]
Subject Classification
08; 0801
Abstract
Convolutional architectures have proven to be extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision transformers rely on more flexible self-attention layers, and have recently outperformed CNNs for image classification. However, they require costly pre-training on large external datasets or distillation from pre-trained convolutional networks. In this paper, we ask the following question: is it possible to combine the strengths of these two architectures while avoiding their respective limitations? To this end, we introduce gated positional self-attention (GPSA), a form of positional self-attention which can be equipped with a 'soft' convolutional inductive bias. We initialize the GPSA layers to mimic the locality of convolutional layers, then give each attention head the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information. The resulting convolutional-like ViT architecture, ConViT, outperforms the DeiT (Touvron et al 2020, arXiv:2012.12877) on ImageNet, while offering a much improved sample efficiency. We further investigate the role of locality in learning by first quantifying how it is encouraged in vanilla self-attention layers, then analyzing how it is escaped in GPSA layers. We conclude by presenting various ablations to better understand the success of the ConViT. Our code and models are released publicly at https://github.com/facebookresearch/convit.
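For a concrete picture of the mechanism described in the abstract, the sketch below shows one way a gated positional self-attention layer could be written in PyTorch. This is not the authors' released implementation (that is available in the linked repository): the class name GPSA, the free learnable positional score map pos_scores, and the per-head gating_param are illustrative assumptions, and the real ConViT derives its positional scores from relative-position encodings initialized so that each head attends to a nearby patch, like a convolution.

```python
# Minimal sketch of a gated positional self-attention (GPSA) layer (assumed
# PyTorch, illustrative only, not the released ConViT code).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GPSA(nn.Module):
    def __init__(self, dim, num_heads=4, num_patches=196):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qk = nn.Linear(dim, 2 * dim, bias=False)   # content queries/keys
        self.v = nn.Linear(dim, dim, bias=False)        # values
        self.proj = nn.Linear(dim, dim)                 # output projection
        # Per-head positional attention scores (free parameter here; ConViT
        # instead builds these from fixed relative-position encodings).
        self.pos_scores = nn.Parameter(torch.zeros(num_heads, num_patches, num_patches))
        # One gating parameter per head: sigmoid(gate) weights positional attention.
        self.gating_param = nn.Parameter(torch.ones(num_heads))

    def forward(self, x):
        B, N, C = x.shape
        qk = self.qk(x).reshape(B, N, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
        q, k = qk[0], qk[1]                                          # (B, H, N, head_dim)
        v = self.v(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        content_attn = F.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.head_dim), dim=-1)
        pos_attn = F.softmax(self.pos_scores, dim=-1).unsqueeze(0)   # (1, H, N, N)
        gate = torch.sigmoid(self.gating_param).view(1, -1, 1, 1)    # (1, H, 1, 1)
        # Convex blend: each head decides how much to rely on position vs. content.
        attn = (1.0 - gate) * content_attn + gate * pos_attn
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Example usage: 14 x 14 = 196 patches with embedding dimension 192.
x = torch.randn(2, 196, 192)
y = GPSA(dim=192, num_heads=4, num_patches=196)(x)   # -> shape (2, 196, 192)
```

Initializing the gates so that sigmoid(gating_param) is close to 1 starts each head in a position-dominated, convolution-like regime; training can then lower the gate wherever content-based attention is more useful, which is the 'soft' inductive bias the abstract refers to.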
Pages: 27
References
48 entries in total
[1] Abnar S., 2020, arXiv:2006.00555
[2] Anandkumar A., 2017, arXiv:1610.09322
[3] Bahdanau D., 2016, arXiv:1409.0473
[4] Bello I., Zoph B., Vaswani A., Shlens J., Le Q. V., Attention Augmented Convolutional Networks, 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019, pp. 3285-3294
[5] Carion N., 2020, arXiv:2005.12872
[6] Chen Y.-C., Li L., Yu L., El Kholy A., Ahmed F., Gan Z., Cheng Y., Liu J., UNITER: UNiversal Image-TExt Representation Learning, Computer Vision - ECCV 2020, Pt XXX, 2020, 12375, pp. 104-120
[7] Chen Y. P., 2018, arXiv:1810.11579
[8] Cordonnier J.-B., 2020, arXiv:1911.03584
[9] d'Ascoli S., 2019, Advances in Neural Information Processing Systems, p. 9334
[10] Devlin J., 2019, arXiv:1810.04805