Compressing Pre-trained Models of Code into 3 MB

Cited by: 12
|
Authors
Shi, Jieke [1 ]
Yang, Zhou [1 ]
Xu, Bowen [1 ]
Kang, Hong Jin [1 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
Source
PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022 | 2022
Funding
National Research Foundation, Singapore;
Keywords
Model Compression; Genetic Algorithm; Pre-Trained Models;
DOI
10.1145/3551349.3556964
CLC number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Although large pre-trained models of code have delivered significant advancements in various code processing tasks, an impediment stands in the way of their wide and fluent adoption in software developers' daily workflows: these large models consume hundreds of megabytes of memory and run slowly on personal devices, which complicates deployment and greatly degrades the user experience. This motivates us to propose Compressor, a novel approach that compresses pre-trained models of code into extremely small models with negligible performance sacrifice. Our method formulates the design of tiny models as simplifying the pre-trained model architecture: searching for a significantly smaller model that follows an architectural design similar to the original pre-trained model. Compressor uses a genetic algorithm (GA)-based strategy to guide this simplification. Prior studies found that models with higher computational cost tend to be more powerful; inspired by this insight, the GA is designed to maximize a model's Giga floating-point operations (GFLOPs), an indicator of computational cost, while satisfying the constraint on the target model size. We then use knowledge distillation to train the small model: unlabelled data is fed into the large model, and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. We compress the pre-trained models to 3 MB, 160x smaller than their original size. The results show that the compressed CodeBERT and GraphCodeBERT are 4.31x and 4.15x faster than the original models at inference, respectively. More importantly, they maintain 96.15% and 97.74% of the original performance on the vulnerability prediction task, and even higher ratios (99.20% and 97.52%) on the clone detection task.
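To make the two ideas summarised in the abstract concrete, below is a minimal, hypothetical Python/PyTorch sketch: a GA fitness function that prefers candidate architectures with higher GFLOPs but rejects any candidate exceeding the 3 MB size budget, and a single knowledge-distillation step in which the large model's soft outputs on unlabelled inputs act as labels for the small model. Names such as fitness, distillation_step, and the teacher/student callables are illustrative assumptions, not Compressor's actual implementation.

import torch
import torch.nn.functional as F

def fitness(candidate_gflops: float, candidate_size_mb: float, size_budget_mb: float = 3.0) -> float:
    """GA fitness (sketch): maximise computational cost (GFLOPs) subject to the size budget."""
    if candidate_size_mb > size_budget_mb:
        return float("-inf")   # infeasible: violates the target model-size constraint
    return candidate_gflops    # feasible: higher GFLOPs is assumed to indicate a more powerful model

def distillation_step(teacher, student, input_ids, optimizer, temperature: float = 2.0) -> float:
    """One knowledge-distillation update on a batch of unlabelled inputs (sketch)."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(input_ids)   # soft labels from the large pre-trained model
    student_logits = student(input_ids)       # predictions of the small, compressed model
    # Soften both distributions and match them with KL divergence, scaled by T^2.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()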
Pages: 12