Super Vision Transformer

Cited: 9
Authors
Lin, Mingbao [1 ,2 ]
Chen, Mengzhao [1 ]
Zhang, Yuxin [1 ]
Shen, Chunhua [3 ]
Ji, Rongrong [1 ]
Cao, Liujuan [1 ]
Affiliations
[1] Minist Educ China, Sch Informat, Key Lab Multimedia Trusted Percept & Efficient Com, Xiamen, Peoples R China
[2] Tencent Youtu Lab, Shanghai, Peoples R China
[3] Zhejiang Univ, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Hardware efficiency; Supernet; Vision transformer;
DOI
10.1007/s11263-023-01861-3
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We attempt to reduce the computational costs of vision transformers (ViTs), which increase quadratically with the number of tokens. We present a novel training paradigm that trains only one ViT model at a time, yet provides improved image recognition performance under various computational budgets. The trained ViT model, termed super vision transformer (SuperViT), is empowered with the versatile ability to process incoming patches of multiple sizes and to preserve informative tokens at multiple keeping rates (the ratio of tokens kept), achieving good hardware efficiency at inference time, given that the available hardware resources often change from time to time. Experimental results on ImageNet demonstrate that our SuperViT considerably reduces the computational costs of ViT models while even improving performance. For example, we reduce the FLOPs of DeiT-S by 2x while increasing Top-1 accuracy by 0.2%, and by 0.7% for a 1.5x reduction. Our SuperViT also significantly outperforms existing studies on efficient vision transformers: when consuming the same amount of FLOPs, it surpasses the recent state-of-the-art EViT by 1.1% when both use DeiT-S as the backbone. The project of this work is made publicly available at https://github.com/lmbxmu/SuperViT.
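The abstract's core mechanism can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: it assumes tokens come with per-token informativeness scores (e.g. derived from attention), and the function name `keep_tokens` is hypothetical. It shows what a "keeping rate" does, and why pruning tokens pays off quadratically in self-attention cost.

```python
import math

def keep_tokens(tokens, scores, keep_rate):
    """Keep the top ceil(keep_rate * N) tokens ranked by informativeness score.

    Hypothetical helper illustrating the "keeping rate" from the abstract;
    the actual SuperViT scoring and pruning schedule differ.
    """
    n_keep = max(1, math.ceil(keep_rate * len(tokens)))
    # Rank token indices by score, highest first.
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    # Restore spatial order among the survivors.
    kept = sorted(ranked[:n_keep])
    return [tokens[i] for i in kept]

# One trained model can serve several budgets by switching keep_rate at
# inference: self-attention cost grows quadratically with token count,
# so keeping half the tokens cuts that cost roughly 4x.
tokens = ["t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7"]
scores = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4]
print(keep_tokens(tokens, scores, 0.5))  # ['t0', 't2', 't4', 't6']
```

Sorting the surviving indices before gathering preserves the original patch order, which matters because ViT tokens carry positional information.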
Pages: 3136-3151
Page count: 16