A Survey on Vision Transformer

Citations: 2107
Authors
Han, Kai [1 ]
Wang, Yunhe [1 ]
Chen, Hanting [1 ,2 ]
Chen, Xinghao [1 ]
Guo, Jianyuan [1 ]
Liu, Zhenhua [1 ,2 ]
Tang, Yehui [1 ,2 ]
Xiao, An [1 ]
Xu, Chunjing [1 ]
Xu, Yixing [1 ]
Yang, Zhaohui [1 ,2 ]
Zhang, Yiman
Tao, Dacheng [3 ]
Affiliations
[1] Huawei Noah's Ark Lab, Beijing 100084, People's Republic of China
[2] Peking Univ, Sch EECS, Beijing 100871, People's Republic of China
[3] Univ Sydney, Fac Engn, Sch Comp Sci, Darlington, NSW 2008, Australia
Keywords
Transformers; Task analysis; Encoding; Computer vision; Computational modeling; Visualization; Object detection; high-level vision; low-level vision; self-attention; transformer; video
DOI
10.1109/TPAMI.2022.3152247
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformer to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks, such as convolutional and recurrent neural networks. Given its high performance and reduced need for vision-specific inductive bias, transformer is receiving increasing attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them by task and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component of transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
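The scaled dot-product self-attention the abstract identifies as transformer's base component can be sketched as follows. This is a minimal NumPy illustration, not code from the survey; the toy dimensions (4 tokens, 8-dim embeddings, 4-dim projections) and the function names are chosen here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence.

    X:          (n_tokens, d_model) input embeddings (e.g. image patches)
    Wq, Wk, Wv: (d_model, d_k) query/key/value projection matrices
    Returns:    (n_tokens, d_k) attended representations
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Pairwise token similarities, scaled by sqrt(d_k) to keep gradients stable.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ V

# Toy example: 4 patch tokens with 8-dim embeddings, projected to 4 dims.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 4)
```

Every output token is a weighted mixture of all value vectors, which is why self-attention captures global dependencies in a single layer, in contrast to the local receptive fields of convolutions.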
Pages: 87-110
Page count: 24