Compressing Pre-trained Models of Code into 3 MB

Cited by: 12
Authors
Shi, Jieke [1 ]
Yang, Zhou [1 ]
Xu, Bowen [1 ]
Kang, Hong Jin [1 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
Source
PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022 | 2022
Funding
National Research Foundation, Singapore;
Keywords
Model Compression; Genetic Algorithm; Pre-Trained Models;
DOI
10.1145/3551349.3556964
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Although large pre-trained models of code have delivered significant advancements in various code processing tasks, there is an impediment to the wide and fluent adoption of these powerful models in software developers' daily workflows: the large models consume hundreds of megabytes of memory and run slowly on personal devices, which causes problems in model deployment and greatly degrades the user experience. This motivates us to propose Compressor, a novel approach that can compress pre-trained models of code into extremely small models with negligible performance sacrifice. Our proposed method formulates the design of tiny models as simplifying the pre-trained model architecture: searching for a significantly smaller model that follows an architectural design similar to the original pre-trained model. Compressor uses a genetic algorithm (GA)-based strategy to guide the simplification process. Prior studies found that a model with higher computational cost tends to be more powerful. Inspired by this insight, the GA is designed to maximize a model's giga floating-point operations (GFLOPs), an indicator of the model's computational cost, subject to a constraint on the target model size. Then, we use the knowledge distillation technique to train the small model: unlabelled data is fed into the large model, and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. We use our method to compress pre-trained models to a size of 3 MB, which is 160x smaller than their original size. The results show that compressed CodeBERT and GraphCodeBERT are 4.31x and 4.15x faster than the original models at inference, respectively. More importantly, they maintain 96.15% and 97.74% of the original performance on the vulnerability prediction task. They even maintain higher ratios (99.20% and 97.52%) of the original performance on the clone detection task.
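The two technical ingredients in the abstract, a GA search that maximizes GFLOPs under a 3 MB size budget and a distillation step that trains the small model on the large model's outputs over unlabelled data, can be pictured with a short sketch. The snippet below is only an illustration of those ideas, not the authors' Compressor code; names such as fitness, distill_epoch, teacher, student, unlabelled_loader, and temperature, as well as the HuggingFace-style `.logits` output interface, are assumptions.

```python
# Illustrative sketch only: assumed names and interfaces, not the paper's implementation.
import torch
import torch.nn.functional as F

def fitness(candidate_gflops, candidate_size_mb, target_size_mb=3.0):
    """GA fitness in the spirit of the abstract: prefer higher GFLOPs,
    but reject candidate architectures that exceed the target model size."""
    if candidate_size_mb > target_size_mb:
        return float("-inf")  # infeasible: violates the size constraint
    return candidate_gflops

def distill_epoch(teacher, student, unlabelled_loader, optimizer, temperature=2.0):
    """Knowledge distillation: the teacher's soft outputs on unlabelled code
    act as training targets for the much smaller student."""
    teacher.eval()
    student.train()
    for input_ids, attention_mask in unlabelled_loader:
        with torch.no_grad():
            teacher_logits = teacher(input_ids, attention_mask=attention_mask).logits
        student_logits = student(input_ids, attention_mask=attention_mask).logits
        # Match the softened teacher and student distributions with KL divergence.
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the setting the abstract describes, the teacher would be a fine-tuned CodeBERT or GraphCodeBERT and the student would be the small architecture found by the GA search.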
Pages: 12