Compressing Pre-trained Models of Code into 3 MB

Cited by: 12
Authors
Shi, Jieke [1 ]
Yang, Zhou [1 ]
Xu, Bowen [1 ]
Kang, Hong Jin [1 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
Source
PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022 | 2022
Funding
National Research Foundation, Singapore;
Keywords
Model Compression; Genetic Algorithm; Pre-Trained Models;
DOI
10.1145/3551349.3556964
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Although large pre-trained models of code have delivered significant advancements in various code processing tasks, a major impediment hinders the wide and smooth adoption of these powerful models in software developers' daily workflows: the models consume hundreds of megabytes of memory and run slowly on personal devices, which complicates deployment and greatly degrades the user experience. This motivates us to propose Compressor, a novel approach that compresses pre-trained models of code into extremely small models with negligible performance sacrifice. Our method formulates the design of a tiny model as a simplification of the pre-trained model architecture: it searches for a significantly smaller model whose architectural design is similar to that of the original pre-trained model. Compressor uses a genetic algorithm (GA)-based strategy to guide this simplification. Prior studies found that models with higher computational cost tend to be more powerful; inspired by this insight, the GA is designed to maximize a candidate model's giga floating-point operations (GFLOPs), an indicator of computational cost, subject to a constraint on the target model size. We then train the small model with knowledge distillation: unlabelled data is fed into the large model, and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. Our method compresses the pre-trained models to 3 MB, 160x smaller than their original size. The results show that the compressed CodeBERT and GraphCodeBERT are 4.31x and 4.15x faster than the original models at inference, respectively. More importantly, they maintain 96.15% and 97.74% of the original performance on vulnerability prediction, and even higher ratios (99.20% and 97.52%) on clone detection.
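The two core steps described in the abstract can be illustrated with short sketches. The first is the GA-based architecture search: a minimal genetic algorithm over a few BERT-style hyperparameters that maximizes estimated GFLOPs while keeping the estimated model size under a 3 MB budget. The search space, the SEQ_LEN and BYTES_PER_PARAM constants, and the parameter/GFLOPs estimation formulas below are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a GA-based architecture search under a 3 MB size budget.
import random

SIZE_BUDGET_MB = 3.0
SEQ_LEN = 400            # assumed input length used only for the FLOPs estimate
BYTES_PER_PARAM = 4      # fp32 weights

# Illustrative search space over BERT-style hyperparameters.
SPACE = {
    "layers": [1, 2, 3, 4, 6],
    "hidden": [16, 32, 64, 96, 128],
    "heads":  [1, 2, 4, 8],          # does not change the parameter estimate below
    "ffn":    [32, 64, 128, 256, 512],
    "vocab":  [1000, 2000, 5000, 10000],
}

def random_arch():
    return {k: random.choice(v) for k, v in SPACE.items()}

def num_params(a):
    # Rough estimate: embedding table + attention and FFN weights per layer (biases ignored).
    h = a["hidden"]
    return a["vocab"] * h + a["layers"] * (4 * h * h + 2 * h * a["ffn"])

def size_mb(a):
    return num_params(a) * BYTES_PER_PARAM / 1e6

def gflops(a):
    # Rough estimate: projection/FFN matmuls plus attention score computation.
    h, n = a["hidden"], SEQ_LEN
    per_layer = 2 * n * (4 * h * h + 2 * h * a["ffn"]) + 2 * n * n * h
    return a["layers"] * per_layer / 1e9

def fitness(a):
    # Maximize GFLOPs; reject architectures that exceed the size budget.
    return gflops(a) if size_mb(a) <= SIZE_BUDGET_MB else -1.0

def crossover(p1, p2):
    return {k: random.choice([p1[k], p2[k]]) for k in SPACE}

def mutate(a, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else a[k]) for k in SPACE}

def search(pop_size=50, generations=100):
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = search()
    print(best, f"{size_mb(best):.2f} MB", f"{gflops(best):.3f} GFLOPs")
```

The second is the knowledge distillation step: unlabelled inputs are passed through the frozen teacher (the original pre-trained model), and the student (the searched small model) is trained to match the teacher's outputs. The PyTorch setup, the data-loader format, the assumption that both models return logits directly, and the temperature-scaled KL loss are all sketch-level choices; the abstract only states that the large model's outputs are used as labels for the small model.

```python
# Hypothetical distillation loop: the student learns from the teacher's soft labels.
import torch
import torch.nn.functional as F

def distill_epoch(teacher, student, unlabelled_loader, optimizer, temperature=2.0, device="cpu"):
    teacher.eval()
    student.train()
    for input_ids, attention_mask in unlabelled_loader:   # no ground-truth labels needed
        input_ids, attention_mask = input_ids.to(device), attention_mask.to(device)
        with torch.no_grad():
            teacher_logits = teacher(input_ids, attention_mask)   # outputs of the large model
        student_logits = student(input_ids, attention_mask)
        # KL divergence between temperature-softened distributions (a common KD loss).
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```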
Pages: 12