No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence

Cited by: 55
|
Authors
Wang, Chaozheng [1 ]
Yang, Yuanhang [1 ]
Gao, Cuiyun [1 ,4 ,5 ]
Peng, Yun [2 ]
Zhang, Hongyu [3 ]
Lyu, Michael R. [2 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[3] Univ Newcastle, Newcastle, NSW, Australia
[4] Peng Cheng Lab, Shenzhen, Peoples R China
[5] Guangdong Prov Key Lab Novel Secur Intelligence T, Shenzhen, Peoples R China
Source
PROCEEDINGS OF THE 30TH ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2022 | 2022
Funding
National Natural Science Foundation of China;
Keywords
code intelligence; prompt tuning; empirical study;
DOI
10.1145/3540250.3549113
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Code
081202 ; 0835 ;
Abstract
Pre-trained models have been shown to be effective in many code intelligence tasks. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, as the inputs to pre-training and downstream tasks take different forms, it is hard to fully exploit the knowledge of pre-trained models. Moreover, the performance of fine-tuning strongly relies on the amount of downstream data, whereas in practice, scenarios with scarce data are common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new paradigm for tuning, alleviates the above issues and achieves promising results in various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this paper, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment with three code intelligence tasks: defect prediction, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning in all three tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that instead of fine-tuning, we could adopt prompt tuning for code intelligence tasks to achieve better performance, especially when task-specific data are scarce.
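To make the idea described in the abstract concrete, the sketch below illustrates cloze-style (hard) prompt tuning for the defect prediction task with a masked-language-model version of CodeBERT. It is a minimal illustration, not the authors' implementation: the template text, the verbalizer words ("defective" / "clean"), and the use of the `microsoft/codebert-base-mlm` checkpoint are assumptions for demonstration, and only the inference step is shown (real prompt tuning would additionally train the model, or learnable soft-prompt embeddings, on template-wrapped labeled examples).

```python
# Minimal sketch of cloze-style prompt tuning for defect prediction (illustrative only).
# Assumptions: the MLM checkpoint "microsoft/codebert-base-mlm", a hypothetical template,
# and a hypothetical verbalizer; the paper's exact templates/verbalizers may differ.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base-mlm")
model = AutoModelForMaskedLM.from_pretrained("microsoft/codebert-base-mlm")
model.eval()

code = "int div(int a, int b) { return a / b; }"

# Cloze-style template: the model fills the <mask> slot, and a verbalizer maps
# the predicted label word to a class id (word choices here are illustrative).
template = f"{code} The code is {tokenizer.mask_token} ."
verbalizer = {" defective": 1, " clean": 0}

inputs = tokenizer(template, return_tensors="pt", truncation=True, max_length=512)
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    mask_logits = model(**inputs).logits[0, mask_index]  # vocabulary scores at <mask>

# Score each label word by its first sub-token id at the masked position.
scores = {
    word: mask_logits[0, tokenizer.encode(word, add_special_tokens=False)[0]].item()
    for word in verbalizer
}
predicted_word = max(scores, key=scores.get)
print(predicted_word.strip(), "->", verbalizer[predicted_word])
```

In a full prompt tuning setup, the same template-wrapped inputs would be used to fine-tune the model on the labeled downstream data, so the classification task is reformulated to match the masked-language-modeling objective seen during pre-training; soft prompt variants replace the natural-language template tokens with trainable embedding vectors.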
Pages: 382 - 394
Page count: 13
Related Papers
8 records in total
  • [1] Prompt Tuning in Code Intelligence: An Experimental Evaluation
    Wang, Chaozheng
    Yang, Yuanhang
    Gao, Cuiyun
    Peng, Yun
    Zhang, Hongyu
    Lyu, Michael R.
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2023, 49 (11) : 4869 - 4885
  • [2] Towards Efficient Fine-Tuning of Pre-trained Code Models: An Experimental Study and Beyond
    Shi, Ensheng
    Wang, Yanlin
    Zhang, Hongyu
    Du, Lun
    Han, Shi
    Zhang, Dongmei
    Sun, Hongbin
    PROCEEDINGS OF THE 32ND ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2023, 2023, : 39 - 51
  • [3] TCohPrompt: task-coherent prompt-oriented fine-tuning for relation extraction
    Long, Jun
    Yin, Zhuoying
    Liu, Chao
    Huang, Wenti
    COMPLEX & INTELLIGENT SYSTEMS, 2024, : 7565 - 7575
  • [4] Leveraging meta-data of code for adapting prompt tuning for code summarization
    Jiang, Zhihua
    Wang, Di
    Rao, Dongning
    APPLIED INTELLIGENCE, 2025, 55 (02)
  • [5] Leveraging meta-data of code for adapting prompt tuning for code summarization
    Jiang, Zhihua
    Wang, Di
    Rao, Dongning
    APPLIED INTELLIGENCE, 2025, 55 (3)
  • [6] Context-focused Prompt Tuning Pre-trained Code Models to Improve Code Summarization
    Pan, Xinglu
    Liu, Chenxiao
    Zou, Yanzhen
    Zhao, Xianlin
    Xie, Bing
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024, : 1344 - 1349
  • [7] Improving prompt tuning-based software vulnerability assessment by fusing source code and vulnerability description
    Wang, Jiyu
    Chen, Xiang
    Pei, Wenlong
    Yang, Shaoyu
    AUTOMATED SOFTWARE ENGINEERING, 2025, 32 (2)
  • [8] Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification
    Lan, Long
    Wang, Fengxiang
    Zheng, Xiangtao
    Wang, Zengmao
    Liu, Xinwang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63