RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers

Cited by: 16
Authors
Li, Zhikai [1 ,2 ]
Xiao, Junrui [1 ,2 ]
Yang, Lianwei [1 ,2 ]
Gu, Qingyi [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023) | 2023
Funding
National Natural Science Foundation of China;
DOI
10.1109/ICCV51070.2023.01580
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Post-training quantization (PTQ), which only requires a tiny dataset for calibration without end-to-end retraining, is a light and practical model compression technique. Recently, several PTQ schemes for vision transformers (ViTs) have been presented; unfortunately, they typically suffer from non-trivial accuracy degradation, especially in low-bit cases. In this paper, we propose RepQ-ViT, a novel PTQ framework for ViTs based on quantization scale reparameterization, to address the above issues. RepQ-ViT decouples the quantization and inference processes, where the former employs complex quantizers and the latter employs scale-reparameterized simplified quantizers. This ensures both accurate quantization and efficient inference, distinguishing it from existing approaches that sacrifice quantization performance to meet the target hardware. More specifically, we focus on two components with extreme distributions: post-LayerNorm activations with severe inter-channel variation and post-Softmax activations with power-law features, and initially apply channel-wise quantization and log√2 quantization, respectively. Then, we reparameterize the scales to hardware-friendly layer-wise quantization and log2 quantization for inference, with only a slight accuracy or computational cost. Extensive experiments are conducted on multiple vision tasks with different model variants, proving that RepQ-ViT, without hyperparameters or expensive reconstruction procedures, can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level. Code is available at https://github.com/zkkli/RepQ-ViT.
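
To make the two reparameterizations in the abstract concrete, below is a minimal NumPy sketch written for this record; it is not the authors' released code. The helper names (quantize, log_sqrt2_quant), the random calibration parameters, and the choice to round the shared zero point to an integer are assumptions of this example. Part 1 folds the per-channel quantization scales of post-LayerNorm activations into the LayerNorm affine parameters and the following layer's weights, turning a channel-wise quantizer into an equivalent layer-wise one; part 2 shows a log√2 quantizer of the kind applied to post-Softmax attention scores.

import numpy as np

def quantize(x, s, z, bits=4):
    # Asymmetric uniform quantization with scale s and zero point z,
    # returning the dequantized value.
    q = np.clip(np.round(x / s) + z, 0, 2 ** bits - 1)
    return s * (q - z)

# Part 1: post-LayerNorm, channel-wise -> layer-wise reparameterization.
rng = np.random.default_rng(0)
C, T, O = 8, 16, 32                          # channels, tokens, output dim
gamma = rng.normal(1.0, 0.2, C)              # LayerNorm affine weight
beta = rng.normal(0.0, 0.2, C)               # LayerNorm affine bias
W = rng.normal(size=(O, C))                  # next linear layer's weight
b = rng.normal(size=O)                       # next linear layer's bias

# Channel-wise params from calibration (random stand-ins here).
s = rng.uniform(0.05, 0.5, C)                # per-channel scales
z = rng.integers(3, 12, C).astype(float)     # per-channel zero points

# Shared layer-wise params: channel means; rounding the shared zero
# point to an integer keeps the two quantization paths equivalent.
s_t = s.mean()
z_t = np.round(z.mean())
r1 = s / s_t                                 # per-channel variation factor
delta = s_t * (z - z_t)                      # per-channel shift

# Fold the factors into LayerNorm's affine parameters ...
gamma_t = gamma / r1
beta_t = beta / r1 + delta
# ... and compensate in the next layer so the network function is unchanged.
W_t = W * r1[None, :]
b_t = b - W_t @ delta

# Check: channel-wise quantization of X matches layer-wise quantization
# of the reparameterized activation, after the compensated linear layer.
Xn = rng.normal(size=(T, C))                 # normalized LayerNorm input
X = Xn * gamma + beta                        # original activation
Xp = Xn * gamma_t + beta_t                   # reparameterized activation
y_cw = quantize(X, s, z) @ W.T + b           # channel-wise path
y_lw = quantize(Xp, s_t, z_t) @ W_t.T + b_t  # layer-wise path
print(np.abs(y_cw - y_lw).max())             # near-zero: the paths agree

# Part 2: post-Softmax, log-sqrt(2) quantization.
def log_sqrt2_quant(a, bits=4, eps=2.0 ** -16):
    # q = round(-log_sqrt2(a)) = round(-2 * log2(a)), then dequantize.
    q = np.clip(np.round(-2.0 * np.log2(np.maximum(a, eps))), 0, 2 ** bits - 1)
    return 2.0 ** (-q / 2.0)

In the paper, the log√2 grid is likewise reparameterized to a base-2 grid at inference, so that even quantization levels reduce to pure bit shifts and the residual √2 factor on odd levels is handled in a shift-friendly form; the sketch above only shows the quantizer itself.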
Pages: 17181-17190
Number of pages: 10