Convolutional Embedding Makes Hierarchical Vision Transformer Stronger

Cited by: 16
Authors
Wang, Cong [1 ,2 ]
Xu, Hongmin [1 ]
Zhang, Xiong [4 ]
Wang, Li [2 ]
Zheng, Zhitong [1 ]
Liu, Haifeng [1 ,3 ]
Affiliations
[1] OPPO, Data & AI Engn Syst, Beijing, Peoples R China
[2] North China Univ Technol, Beijing Key Lab Urban Intelligent Traff Control T, Beijing, Peoples R China
[3] Univ Sci & Technol China, Hefei, Peoples R China
[4] Neolix Autonomous Vehicle, Beijing, Peoples R China
Source
COMPUTER VISION, ECCV 2022, PT XX | 2022 / Vol. 13680
Keywords
Vision Transformers; Convolutional neural networks; Convolutional embedding; Micro and macro design; NETWORK;
DOI
10.1007/978-3-031-20044-1_42
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Vision Transformers (ViTs) have recently dominated a range of computer vision tasks, yet they suffer from low training-data efficiency and inferior local semantic representation capability without an appropriate inductive bias. Convolutional neural networks (CNNs) inherently capture region-aware semantics, inspiring researchers to reintroduce CNNs into ViT architectures to supply the desired inductive bias. However, is the locality achieved by the micro-level CNNs embedded in ViTs good enough? In this paper, we investigate the problem by thoroughly exploring how the macro architecture of hybrid CNN/ViT models enhances the performance of hierarchical ViTs. In particular, we study the role of the token embedding layers, also known as convolutional embedding (CE), and systematically reveal how CE injects the desired inductive bias into ViTs. In addition, we apply the optimal CE configuration to four recently released state-of-the-art ViTs, effectively boosting their performance. Finally, a family of efficient hybrid CNN/ViT models, dubbed CETNets, is released, which may serve as generic vision backbones. Specifically, CETNets achieve 84.9% Top-1 accuracy on ImageNet-1K (trained from scratch), 48.6% box mAP on the COCO benchmark, and 51.6% mIoU on ADE20K, substantially improving on the corresponding state-of-the-art baselines.
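The abstract attributes the gains to a convolutional embedding (CE) stem that replaces the usual single large-stride patchify projection with stacked strided convolutions. As a rough illustration of that idea only, the PyTorch sketch below implements a generic convolutional token embedding; the layer count, channel widths, and normalization choices are assumptions for illustration, not the CE configuration actually reported for CETNets.

# Illustrative sketch: a convolutional embedding (CE) stem of the kind the
# abstract describes, i.e. a small stack of strided 3x3 convolutions that
# replaces a plain ViT's single large-stride "patchify" projection.
# Widths, depth, and normalization here are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class ConvEmbedding(nn.Module):
    """Downsamples an image 4x with strided convs and flattens it into tokens."""

    def __init__(self, in_chans: int = 3, embed_dim: int = 96) -> None:
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_chans, embed_dim // 2, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim // 2),
            nn.GELU(),
            nn.Conv2d(embed_dim // 2, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)                     # (B, C, H/4, W/4) feature map
        return x.flatten(2).transpose(1, 2)  # (B, H*W/16, C) token sequence

if __name__ == "__main__":
    tokens = ConvEmbedding()(torch.randn(1, 3, 224, 224))
    print(tokens.shape)  # torch.Size([1, 3136, 96])

In a hierarchical ViT, a stem like this (and similar strided-conv downsampling between stages) supplies the local, translation-aware bias the abstract argues plain patch embedding lacks.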
Pages: 739-756
Page count: 18