ScopeViT: Scale-Aware Vision Transformer

Cited by: 5
Authors
Nie, Xuesong [1 ]
Jin, Haoyuan [1 ]
Yan, Yunfeng [1 ]
Chen, Xi [2 ]
Zhu, Zhihang [1 ]
Qi, Donglian [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310027, Peoples R China
[2] Univ Hong Kong, Hong Kong 999077, Peoples R China
Keywords
Vision transformer; Multi-scale features; Efficient attention mechanism;
DOI
10.1016/j.patcog.2024.110470
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-scale features are essential for various vision tasks, such as classification, detection, and segmentation. Although Vision Transformers (ViTs) show remarkable success in capturing global features within an image, how to leverage multi-scale features in Transformers is not well explored. This paper proposes a scale-aware vision Transformer called ScopeViT that efficiently captures multi-granularity representations. Two novel attention mechanisms with lightweight computation are introduced: Multi-Scale Self-Attention (MSSA) and Global-Scale Dilated Attention (GSDA). MSSA embeds visual tokens with different receptive fields into distinct attention heads, allowing the model to perceive various scales across the network. GSDA enhances the model's understanding of the global context through a token-dilation operation, which reduces the number of tokens involved in attention computations. This dual-attention method enables ScopeViT to "see" various scales throughout the entire network and effectively learn inter-object relationships, reducing the heavy quadratic computational complexity. Extensive experiments demonstrate that ScopeViT achieves competitive complexity/accuracy trade-offs compared to existing networks across a wide range of visual tasks. On the ImageNet-1K dataset, ScopeViT achieves a top-1 accuracy of 81.1% using only 7.4M parameters and 2.0G FLOPs. Our approach outperforms Swin (ViT-based) by 1.9% accuracy while saving 42% of the parameters, outperforms MobileViTv2 (Hybrid-based) with a 0.7% accuracy gain while using 50% of the computations, and also beats ConvNeXt V2 (ConvNet-based) by 0.8% with fewer parameters.
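As a rough illustration of the token-dilation idea the abstract ascribes to GSDA, the sketch below subsamples the token sequence before computing self-attention, shrinking the quadratic score matrix. This is only a minimal reading of the abstract, not the authors' implementation: the function name, the fixed stride-based subsampling, and the identity Q/K/V projections are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(tokens, rate):
    """Self-attention over a dilated subset of tokens.

    tokens: (N, d) array of visual tokens; rate: keep every `rate`-th
    token, so the attention cost drops from O(N^2) to O((N/rate)^2).
    Identity projections stand in for learned Q/K/V weights (assumption).
    """
    sub = tokens[::rate]                   # token dilation: sparse subset
    d = sub.shape[-1]
    scores = sub @ sub.T / np.sqrt(d)      # (N/rate, N/rate) score matrix
    return softmax(scores) @ sub           # attention output for the subset
```

For N = 8 tokens and rate = 2, only 4 tokens enter the attention computation, so the score matrix is 4x4 instead of 8x8, in line with the claimed reduction of quadratic cost.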
Pages: 12