ReViT: Vision Transformer Accelerator With Reconfigurable Semantic-Aware Differential Attention

Cited by: 0
Authors
Zou, Xiaofeng [1 ]
Chen, Cen [1 ,2 ]
Shao, Hongen [1 ]
Wang, Qinyu [1 ]
Zhuang, Xiaobin [1 ]
Li, Yangfan [3 ]
Li, Keqin [4 ]
Affiliations
[1] South China Univ Technol, Sch Future Technol, Guangzhou 510641, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[4] SUNY Coll New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Semantics; Transformers; Visualization; Computer vision; Computational modeling; Attention mechanisms; Dogs; Computers; Snow; Performance evaluation; Hardware accelerator; vision transformers; software-hardware co-design; HIERARCHIES;
DOI
10.1109/TC.2024.3504263
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
While vision transformers (ViTs) have continued to achieve new milestones in computer vision, their complicated network architectures with high computation and memory costs have hindered their deployment on resource-limited edge devices. Some customized accelerators have been proposed to accelerate the execution of ViTs, achieving improved performance with reduced energy consumption. However, these approaches utilize flattened attention mechanisms and ignore the inherent hierarchical visual semantics in images. In this work, we conduct a thorough analysis of hierarchical visual semantics in real-world images, revealing opportunities and challenges of leveraging visual semantics to accelerate ViTs. We propose ReViT, a systematic algorithm and architecture co-design approach that exploits visual semantics to accelerate ViTs. Our proposed algorithm leverages the strong feature similarity among tokens of the same semantic class to reduce computation and communication through a differential attention mechanism, and supports semantic-aware attention efficiently. A novel dedicated architecture is designed to support the proposed algorithm and translate it into performance improvements. Moreover, we propose an efficient execution dataflow to alleviate workload imbalance and maximize hardware utilization. ReViT opens new directions for accelerating ViTs by exploring the underlying visual semantics of images. ReViT achieves an average 2.3x speedup and 3.6x higher energy efficiency over state-of-the-art ViT accelerators.
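The abstract's core idea — tokens of the same semantic class have strongly similar features, so attention results can be shared rather than recomputed per token — can be illustrated with a minimal sketch. This is a hypothetical illustration only, not the paper's actual differential attention algorithm: here, full attention is computed once for one representative token per semantic cluster and reused by the cluster's remaining tokens, which is where the computation and communication savings would come from. The function `semantic_aware_attention` and its clustering-by-label interface are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_aware_attention(Q, K, V, labels):
    """Approximate single-head attention that reuses results across
    tokens sharing a semantic label (illustrative sketch, not ReViT).

    Q, K, V: (n, d) query/key/value matrices for n tokens.
    labels:  (n,) integer semantic-cluster label per token.
    """
    n, d = Q.shape
    out = np.empty_like(V)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rep = idx[0]  # pick one representative token per cluster
        # Full attention row computed once for the representative...
        rep_out = softmax(Q[rep] @ K.T / np.sqrt(d)) @ V
        # ...and reused by every other token in the same cluster,
        # skipping their score computation entirely.
        out[idx] = rep_out
    return out
```

With k semantic clusters, this computes k attention rows instead of n, trading exactness for an O(n/k) reduction in score computation; a differential scheme like the paper describes would additionally apply per-token corrections rather than reusing the representative's output verbatim.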
Pages: 1079 - 1093
Page count: 15