Compressing Pre-trained Models of Code into 3 MB

Cited by: 12
Authors
Shi, Jieke [1]
Yang, Zhou [1]
Xu, Bowen [1]
Kang, Hong Jin [1]
Lo, David [1]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
Source
PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022 | 2022
Funding
National Research Foundation, Singapore;
Keywords
Model Compression; Genetic Algorithm; Pre-Trained Models;
DOI
10.1145/3551349.3556964
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Although large pre-trained models of code have delivered significant advancements in various code processing tasks, an impediment remains to the wide and fluent adoption of these powerful models in software developers' daily workflows: the large models consume hundreds of megabytes of memory and run slowly on personal devices, which causes problems in model deployment and greatly degrades the user experience. This motivates us to propose Compressor, a novel approach that can compress pre-trained models of code into extremely small models with negligible performance sacrifice. Our proposed method formulates the design of tiny models as simplifying the pre-trained model architecture: searching for a significantly smaller model that follows an architectural design similar to the original pre-trained model. Compressor uses a genetic algorithm (GA)-based strategy to guide the simplification process. Prior studies found that a model with higher computational cost tends to be more powerful. Inspired by this insight, the GA is designed to maximize a model's Giga floating-point operations (GFLOPs), an indicator of the model's computational cost, subject to the constraint on the target model size. Then, we use the knowledge distillation technique to train the small model: unlabelled data is fed into the large model, and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. We use our method to compress the pre-trained models to a size of 3 MB, which is 160x smaller than their original size. The results show that compressed CodeBERT and GraphCodeBERT are 4.31x and 4.15x faster than the original models at inference, respectively. More importantly, they maintain 96.15% and 97.74% of the original performance on the vulnerability prediction task. They even maintain higher ratios (99.20% and 97.52%) of the original performance on the clone detection task.
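To make the two core ideas of the abstract concrete, the following is a minimal Python/PyTorch sketch, not the authors' implementation: a GA fitness function that favours higher GFLOPs while rejecting candidates over a 3 MB size budget, and a knowledge-distillation step in which the large model's outputs on unlabelled code serve as soft labels for the small model. Names such as estimate of candidate_gflops, SIZE_BUDGET_MB, distill_step, and the temperature value are illustrative assumptions; how GFLOPs are estimated for a candidate architecture is left out here.

# Minimal sketch (assumptions noted above), not the paper's actual code.
import torch
import torch.nn.functional as F

SIZE_BUDGET_MB = 3.0  # target compressed size, taken from the paper's title

def model_size_mb(model: torch.nn.Module) -> float:
    # Rough on-disk size assuming float32 parameters (4 bytes each).
    return sum(p.numel() for p in model.parameters()) * 4 / 1024 / 1024

def fitness(candidate_gflops: float, candidate_size_mb: float) -> float:
    # GA fitness: maximize computational cost (GFLOPs) subject to the size budget.
    if candidate_size_mb > SIZE_BUDGET_MB:
        return float("-inf")  # infeasible candidate architecture
    return candidate_gflops

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # KL divergence between temperature-softened teacher and student distributions.
    t_soft = F.softmax(teacher_logits / temperature, dim=-1)
    s_log_soft = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s_log_soft, t_soft, reduction="batchmean") * temperature ** 2

def distill_step(teacher, student, optimizer, unlabelled_batch):
    # One training step on unlabelled code: the teacher's predictions act as labels.
    with torch.no_grad():
        teacher_logits = teacher(unlabelled_batch)
    student_logits = student(unlabelled_batch)
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In a full pipeline, the GA would presumably mutate architecture hyperparameters (e.g., layer count, hidden size), score each candidate with fitness(), and then train the winning small architecture by repeatedly calling distill_step() over unlabelled code.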
Pages: 12
Related Papers
50 records in total
  • [1] Natural Attack for Pre-trained Models of Code
    Yang, Zhou
    Shi, Jieke
    He, Junda
    Lo, David
    2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2022), 2022, : 1482 - 1493
  • [2] Diet Code Is Healthy: Simplifying Programs for Pre-trained Models of Code
    Zhang, Zhaowei
    Zhang, Hongyu
    Shen, Beijun
    Gu, Xiaodong
    PROCEEDINGS OF THE 30TH ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2022, 2022, : 1073 - 1084
  • [3] Assessing and improving syntactic adversarial robustness of pre-trained models for code translation
    Yang, Guang
    Zhou, Yu
    Zhang, Xiangyu
    Chen, Xiang
    Han, Tingting
    Chen, Taolue
    INFORMATION AND SOFTWARE TECHNOLOGY, 2025, 181
  • [4] Pre-Trained Language Models and Their Applications
    Wang, Haifeng
    Li, Jiwei
    Wu, Hua
    Hovy, Eduard
    Sun, Yu
    ENGINEERING, 2023, 25 : 51 - 65
  • [5] RAPID: Zero-Shot Domain Adaptation for Code Search with Pre-Trained Models
    Fan, Guodong
    Chen, Shizhan
    Gao, Cuiyun
    Xiao, Jianmao
    Zhang, Tao
    Feng, Zhiyong
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (05)
  • [6] Pre-trained models: Past, present and future
    Han, Xu
    Zhang, Zhengyan
    Ding, Ning
    Gu, Yuxian
    Liu, Xiao
    Huo, Yuqi
    Qiu, Jiezhong
    Yao, Yuan
    Zhang, Ao
    Zhang, Liang
    Han, Wentao
    Huang, Minlie
    Jin, Qin
    Lan, Yanyan
    Liu, Yang
    Liu, Zhiyuan
    Lu, Zhiwu
    Qiu, Xipeng
    Song, Ruihua
    Tang, Jie
    Wen, Ji-Rong
    Yuan, Jinhui
    Zhao, Wayne Xin
    Zhu, Jun
    AI OPEN, 2021, 2 : 225 - 250
  • [7] HinPLMs: Pre-trained Language Models for Hindi
    Huang, Xixuan
    Lin, Nankai
    Li, Kexin
    Wang, Lianxi
    Gan, Suifu
    2021 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2021, : 241 - 246
  • [8] Text clustering based on pre-trained models and autoencoders
    Xu, Qiang
    Gu, Hao
    Ji, ShengWei
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2024, 17
  • [9] A Survey on Time-Series Pre-Trained Models
    Ma, Qianli
    Liu, Zhen
    Zheng, Zhenjing
    Huang, Ziyang
    Zhu, Siying
    Yu, Zhongzhong
    Kwok, James T.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 7536 - 7555
  • [10] Universal embedding for pre-trained models and data bench
    Cho, Namkyeong
    Cho, Taewon
    Shin, Jaesun
    Jeon, Eunjoo
    Lee, Taehee
    NEUROCOMPUTING, 2025, 619