MG-ViT: A Multi-Granularity Method for Compact and Efficient Vision Transformers

Cited: 0
Authors
Zhang, Yu [1 ]
Liu, Yepeng [2 ]
Miao, Duoqian [1 ]
Zhang, Qi [1 ]
Shi, Yiwei [3 ]
Hu, Liang [1 ]
Affiliations
[1] Tongji Univ, Shanghai, Peoples R China
[2] Univ Florida, Gainesville, FL 32611 USA
[3] Univ Bristol, Bristol BS8 1TH, Avon, England
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Vision Transformer (ViT) faces obstacles to wide application due to its huge computational cost. Almost all existing studies on compressing ViT split an image at a single granularity, with very little exploration of splitting an image at multiple granularities. Important information often concentrates in a few regions of an image, which calls for allocating attention to an image at multiple granularities. Motivated by this, we introduce a simple but effective multi-granularity strategy to compress ViT. We propose a two-stage multi-granularity framework, MG-ViT, to balance ViT's performance and computational cost. In the single-granularity inference stage, an input image is split into a small number of coarse patches for simple inference. If necessary, a multi-granularity inference stage is triggered, in which the important patches are further split into finer-grained patches for subsequent inference. Moreover, prior studies compress ViT only for classification, whereas we extend the multi-granularity strategy to hierarchical ViT for downstream tasks such as detection and segmentation. Extensive experiments demonstrate the effectiveness of the multi-granularity strategy. For instance, on ImageNet, without any loss of performance, MG-ViT reduces the FLOPs of LV-ViT-S by 47% and of DeiT-S by 56%.
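The two-stage procedure described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python/PyTorch sketch of coarse-to-fine inference, not the authors' implementation: the names `split_into_patches`, `two_stage_inference`, `score_patches`, `coarse_model`, and `fine_model`, as well as the confidence-threshold early exit and the keep ratio, are illustrative assumptions; MG-ViT's actual patch-importance scoring and stage-switching criterion are defined in the paper.

```python
# Minimal sketch of two-stage, coarse-to-fine (multi-granularity) inference.
# All model/scoring interfaces below are hypothetical stand-ins, not MG-ViT's code.
import torch
import torch.nn.functional as F


def split_into_patches(image: torch.Tensor, patch_size: int) -> torch.Tensor:
    """Split a (C, H, W) tensor into (N, C, patch_size, patch_size) patches."""
    c, _, _ = image.shape
    patches = image.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
    # (C, H/p, W/p, p, p) -> (N, C, p, p)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch_size, patch_size)


def two_stage_inference(image, coarse_model, fine_model, score_patches,
                        coarse_patch=32, fine_patch=16,
                        confidence_threshold=0.9, keep_ratio=0.5):
    """Coarse-to-fine inference in the spirit of MG-ViT's two stages."""
    # Stage 1: single-granularity inference on a small number of coarse patches.
    coarse_patches = split_into_patches(image, coarse_patch)
    probs = F.softmax(coarse_model(coarse_patches), dim=-1)
    confidence, prediction = probs.max(dim=-1)
    if confidence.item() >= confidence_threshold:
        return prediction  # the coarse pass is confident enough: early exit

    # Stage 2: re-split only the important coarse patches at a finer granularity.
    importance = score_patches(coarse_patches)              # one score per coarse patch
    num_keep = max(1, int(keep_ratio * importance.numel()))
    keep_idx = importance.topk(num_keep).indices
    fine_patches = torch.cat(
        [split_into_patches(coarse_patches[i], fine_patch) for i in keep_idx])
    return fine_model(fine_patches).argmax(dim=-1)


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real ViT heads would go here.
    num_classes = 10
    coarse_model = lambda patches: torch.randn(1, num_classes)
    fine_model = lambda patches: torch.randn(1, num_classes)
    score_patches = lambda patches: patches.abs().mean(dim=(1, 2, 3))

    image = torch.randn(3, 224, 224)
    print(two_stage_inference(image, coarse_model, fine_model, score_patches))
```

The early exit is where the computation is saved: easy images never reach the fine-grained stage, while hard images pay the extra cost only on the patches scored as important.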
Pages: 20
Related Papers
50 records in total
  • [1] MgMViT: Multi-Granularity and Multi-Scale Vision Transformer for Efficient Action Recognition
    Huo, Hua
    Li, Bingjie
    ELECTRONICS, 2024, 13 (05)
  • [2] An efficient selector for multi-granularity attribute reduction
    Liu, Keyu
    Yang, Xibei
    Fujita, Hamido
    Liu, Dun
    Yang, Xin
    Qian, Yuhua
    INFORMATION SCIENCES, 2019, 505 : 457 - 472
  • [3] The Method of Analysis Granularity Determination for Multi-granularity Time Series
    Chen, Hailan
    Gao, Xuedong
    Du, Qiangbo
    2018 8TH INTERNATIONAL CONFERENCE ON LOGISTICS, INFORMATICS AND SERVICE SCIENCES (LISS), 2018,
  • [4] Research on the multi-granularity method of role engineering
    Jiao, Yongmei
    Zhang, Menghan
    Wu, Yu
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2022, 16
  • [5] A Causal Disentangled Multi-granularity Graph Classification Method
    Li, Yuan
    Liu, Li
    Chen, Penggang
    Zhang, Youmin
    Wang, Guoyin
    ROUGH SETS, IJCRS 2023, 2023, 14481 : 354 - 368
  • [6] Irregular object simplify method based on multi-granularity
    Liao, Xiaoping
    Xiao, Haihua
    Ma, Junyan
    PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON MECHATRONICS AND INDUSTRIAL INFORMATICS, 2015, 31 : 535 - 541
  • [7] A Multi-Granularity Semantic Extraction Method for Text Classification
    Li, Min
    Liu, Zeyu
    Li, Gang
    Han, Delong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XIII, ICIC 2024, 2024, 14874 : 224 - 236
  • [8] Multi-Granularity Partial Encryption Method of CAD Model
    Cai, X. T.
    He, F. Z.
    Li, W. D.
    Li, X. X.
    Wu, Y. Q.
    PROCEEDINGS OF THE 2013 IEEE 17TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN (CSCWD), 2013, : 23 - 30
  • [9] Multi-granularity Design Rationale Knowledge Modeling Method
    Wang, Jiaji
    Liu, Jihong
    Xu, Wenting
    PROCEEDINGS OF 2019 IEEE 3RD INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2019), 2019, : 2566 - 2570
  • [10] Meso-Granularity Labeled Method for Multi-Granularity Formal Concept Analysis
    Li J.
    Li Y.
    Mi Y.
    Wu W.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2020, 57 (02): : 447 - 458