Boost Transformer-based Language Models with GPU-Friendly Sparsity and Quantization

Cited: 0
Authors:
Yu, Chong [1]
Chen, Tao [2]
Gan, Zhongxue [1]
Institutions:
[1] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[2] Fudan Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
Keywords:
DOI: Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Along with the performance improvements in the NLP domain, the sizes of transformer-based language models (TLM) have also increased dramatically. Some prior works compress TLM models into more compact forms, but do not fully consider that hardware characteristics may not support efficient execution of these forms, so deploying TLMs on hardware with noticeable acceleration remains challenging. This paper designs a compression scheme named GPUSQ-TLM to maximally exploit the GPU-friendly 2:4 fine-grained structured sparsity and quantization characteristics. Specifically, a dense TLM model is first pruned to meet the GPU's acceleration constraint of sparse patterns with the FP16 type, and then further quantized into a fixed-point model by quantization-aware training, providing an extra speedup for integer tensors on the GPU. A mixed-strategy knowledge distillation over labels, logits, and feature maps is used to best compensate for accuracy during the pruning and quantization processes. Experimental results show the GPUSQ-TLM scheme achieves state-of-the-art compression on TLM models with various encoder and decoder blocks, with negligible accuracy degradation on the SQuAD, GLUE, CNN-DM & XSum, and WikiText benchmarking tasks. Moreover, GPUSQ-TLM can boost actual deployment performance by up to 4.08-4.25x in latency and 6.18-6.79x in throughput on an A100 GPU.
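The 2:4 fine-grained structured sparsity pattern mentioned in the abstract requires that, within every contiguous group of four weights, at most two are nonzero. The following is a minimal illustrative sketch of magnitude-based 2:4 pruning on a flat weight list; the function name and the plain-Python setting are assumptions for illustration, not the authors' implementation (which operates on FP16 tensors during training on GPU).

```python
def prune_2_4(weights):
    """Hypothetical sketch of 2:4 structured pruning: in each
    contiguous group of 4 weights, zero the 2 entries with the
    smallest magnitude, keeping the 2 largest-magnitude ones."""
    pruned = list(weights)
    # process only complete groups of 4
    for i in range(0, len(pruned) - len(pruned) % 4, 4):
        group = pruned[i:i + 4]
        # indices of the 2 smallest-magnitude entries in this group
        drop = sorted(range(4), key=lambda j: abs(group[j]))[:2]
        for j in drop:
            pruned[i + j] = 0.0
    return pruned

# Example: each group of 4 retains its 2 largest-magnitude weights.
print(prune_2_4([0.9, -0.1, 0.05, -0.8, 0.3, 0.2, -0.7, 0.1]))
# → [0.9, 0.0, 0.0, -0.8, 0.3, 0.0, -0.7, 0.0]
```

This pattern matters because the GPU's sparse tensor cores (e.g. on A100) can skip the zeroed half of the multiply-accumulate work only when the zeros follow exactly this 2-out-of-4 layout, which is why the paper prunes to this constraint rather than to unstructured sparsity.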
Pages: 218-235 (18 pages)
Related Papers
50 in total
  • [31] Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping
    Zhang, Minjia
    He, Yuxiong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [32] Arlo: Serving Transformer-based Language Models with Dynamic Input Lengths
    Tan, Xin
    Li, Jiamin
    Yang, Yitao
    Li, Jingzong
    Xu, Hong
    53RD INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2024, 2024, : 367 - 376
  • [33] Enhancing Address Data Integrity using Transformer-Based Language Models
    Kurklu, Omer Faruk
    Akagiunduz, Erdem
    32ND IEEE SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU 2024, 2024,
  • [34] EEG Classification with Transformer-Based Models
    Sun, Jiayao
    Xie, Jin
    Zhou, Huihui
    2021 IEEE 3RD GLOBAL CONFERENCE ON LIFE SCIENCES AND TECHNOLOGIES (IEEE LIFETECH 2021), 2021, : 92 - 93
  • [35] Quantifying the Bias of Transformer-Based Language Models for African American English in Masked Language Modeling
    Salutari, Flavia
    Ramos, Jerome
    Rahmani, Hossein A.
    Linguaglossa, Leonardo
    Lipani, Aldo
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2023, PT I, 2023, 13935 : 532 - 543
  • [36] Incorporating Medical Knowledge to Transformer-based Language Models for Medical Dialogue Generation
    Naseem, Usman
    Bandi, Ajay
    Raza, Shaina
    Rashid, Junaid
    Chakravarthi, Bharathi Raja
    PROCEEDINGS OF THE 21ST WORKSHOP ON BIOMEDICAL LANGUAGE PROCESSING (BIONLP 2022), 2022, : 110 - 115
  • [37] Task-Specific Transformer-Based Language Models in HealthCare: Scoping Review
    Cho, Ha Na
    Jun, Tae Joon
    Kim, Young-Hak
    Kang, Heejun
    Ahn, Imjin
    Gwon, Hansle
    Kim, Yunha
    Seo, Jiahn
    Choi, Heejung
    Kim, Minkyoung
    Han, Jiye
    Kee, Gaeun
    Park, Seohyun
    Ko, Soyoung
    JMIR MEDICAL INFORMATICS, 2024, 12
  • [38] A Comparative Analysis of Transformer-based Protein Language Models for Remote Homology Prediction
    Kabir, Anowarul
    Moldwin, Asher
    Shehu, Amarda
    14TH ACM CONFERENCE ON BIOINFORMATICS, COMPUTATIONAL BIOLOGY, AND HEALTH INFORMATICS, BCB 2023, 2023,
  • [39] Transformer-based Language Models and Homomorphic Encryption: An Intersection with BERT-tiny
    Rovida, Lorenzo
    Leporati, Alberto
    PROCEEDINGS OF THE 10TH ACM INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS, IWSPA 2024, 2024, : 3 - 13
  • [40] Empirical Study of Tweets Topic Classification Using Transformer-Based Language Models
    Mandal, Ranju
    Chen, Jinyan
    Becken, Susanne
    Stantic, Bela
    INTELLIGENT INFORMATION AND DATABASE SYSTEMS, ACIIDS 2021, 2021, 12672 : 340 - 350