Compressing Large-Scale Transformer-Based Models: A Case Study on BERT

Cited by: 69
Authors
Ganesh, Prakhar [1 ]
Chen, Yao [1 ]
Lou, Xin [1 ]
Khan, Mohammad Ali [1 ]
Yang, Yin [2 ]
Sajjad, Hassan [3 ]
Nakov, Preslav [3 ]
Chen, Deming [4 ]
Winslett, Marianne [4 ]
Affiliations
[1] Adv Digital Sci Ctr, Singapore, Singapore
[2] Hamad Bin Khalifa Univ, Coll Sci & Engn, Ar Rayyan, Qatar
[3] Hamad Bin Khalifa Univ, Qatar Comp Res Inst, Ar Rayyan, Qatar
[4] Univ Illinois, Urbana, IL USA
Funding
National Research Foundation, Singapore;
Keywords
All Open Access; Gold; Green;
DOI
10.1162/tacl_a_00413
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and thus are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted considerable research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
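To illustrate one family of compression techniques the survey covers, the following is a minimal sketch (not the paper's own method) of post-training dynamic quantization applied to a pre-trained BERT model using PyTorch and the Hugging Face transformers library; the model name and the parameter-counting helper are illustrative assumptions.

    # Minimal sketch: dynamic int8 quantization of BERT's linear layers.
    # This is one common compression approach, not the specific recipe
    # evaluated in the surveyed paper.
    import torch
    from transformers import AutoModel

    model = AutoModel.from_pretrained("bert-base-uncased")  # ~110M parameters

    # Convert nn.Linear weights to int8 for CPU inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    def float_param_count(m):
        # Rough size proxy; quantized Linear weights are packed and no
        # longer appear as ordinary float parameters.
        return sum(p.numel() for p in m.parameters())

    print("float parameters before:", float_param_count(model))
    print("float parameters after: ", float_param_count(quantized))

Dynamic quantization trades a small amount of accuracy for roughly 4x smaller stored weights and faster CPU inference, which is why it appears frequently alongside pruning and distillation in the compression literature.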
Pages: 1061 - 1080
Page count: 20