Prompt Tuning in Code Intelligence: An Experimental Evaluation

Cited by: 6
Authors
Wang, Chaozheng [1 ]
Yang, Yuanhang [1 ]
Gao, Cuiyun [1 ]
Peng, Yun [2 ]
Zhang, Hongyu [3 ,4 ]
Lyu, Michael R. [2 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong 999077, Peoples R China
[3] Univ Newcastle, Newcastle, Australia
[4] Chongqing Univ, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Tuning; Codes; Task analysis; Training; Predictive models; Adaptation models; Source coding; Code intelligence; prompt tuning; empirical study;
DOI
10.1109/TSE.2023.3313881
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Code
081202; 0835;
Abstract
Pre-trained models have proven effective in many code intelligence tasks, such as automatic code summarization and defect prediction. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, because the inputs to pre-training and to downstream tasks take different forms, it is hard to fully exploit the knowledge encoded in pre-trained models. Besides, the performance of fine-tuning strongly depends on the amount of downstream task data, and in practice data scarcity is common. Recent studies in natural language processing (NLP) show that prompt tuning, a new tuning paradigm, alleviates these issues and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this article, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We apply prompt tuning to the popular pre-trained models CodeBERT and CodeT5 and experiment with four code intelligence tasks: defect prediction, code search, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning on all four tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, prompt tuning can be adopted for code intelligence tasks to achieve better performance, especially when task-specific data are scarce. We also discuss the implications of adopting prompt tuning in code intelligence tasks.
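
The abstract contrasts prompt tuning with standard fine-tuning: rather than attaching a task-specific head, the downstream task is recast in the same masked-language-model form used during pre-training, with a template supplying the prompt and a verbalizer mapping label words to classes. The sketch below is a hypothetical illustration of that cloze-style reformulation for defect prediction with CodeBERT, not the authors' released code; the checkpoint name "microsoft/codebert-base-mlm", the template wording, and the yes/no verbalizer are assumptions made here for illustration.

# A minimal sketch (assumed names, not the paper's code) of the cloze-style
# reformulation that hard-prompt tuning relies on, applied to defect prediction.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "microsoft/codebert-base-mlm"  # assumed MLM variant of CodeBERT
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Verbalizer: map label words (assumed to be single BPE tokens) to the classes.
verbalizer = {"yes": 1, "no": 0}  # illustrative choice, not from the paper
label_ids = {w: tokenizer(" " + w, add_special_tokens=False).input_ids[0]
             for w in verbalizer}

def defect_scores(code: str) -> dict:
    """Score the verbalizer words at the masked slot of a hand-written template."""
    # The task is recast as filling in a blank, so the input keeps the same
    # masked-language-model form the model saw during pre-training.
    template = f"Is the following code defective ? {tokenizer.mask_token} . {code}"
    inputs = tokenizer(template, return_tensors="pt",
                       truncation=True, max_length=512)
    mask_idx = inputs.input_ids[0].tolist().index(tokenizer.mask_token_id)
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_idx]  # scores over the vocabulary
    # Only the verbalizer words compete; all other vocabulary entries are ignored.
    return {word: logits[tid].item() for word, tid in label_ids.items()}

scores = defect_scores("int div(int a, int b) { return a / b; }")
print(scores, "->", max(scores, key=scores.get))

An actual prompt-tuning run would additionally train on labeled examples with a cross-entropy loss over the verbalizer words; soft-prompt variants replace the hand-written template words with trainable continuous vectors prepended to the input embeddings, which is one of the design choices the paper's evaluation covers.
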
Pages: 4869-4885
Number of pages: 17