Prompt Tuning in Code Intelligence: An Experimental Evaluation

Cited by: 6
Authors
Wang, Chaozheng [1 ]
Yang, Yuanhang [1 ]
Gao, Cuiyun [1 ]
Peng, Yun [2 ]
Zhang, Hongyu [3 ,4 ]
Lyu, Michael R. [2 ]
Affiliations
[1] Harbin Institute of Technology, Shenzhen 518055, People's Republic of China
[2] Chinese University of Hong Kong, Hong Kong 999077, People's Republic of China
[3] University of Newcastle, Newcastle, Australia
[4] Chongqing University, Chongqing 400044, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Tuning; Codes; Task analysis; Training; Predictive models; Adaptation models; Source coding; Code intelligence; prompt tuning; empirical study;
DOI
10.1109/TSE.2023.3313881
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Code
081202; 0835;
Abstract
Pre-trained models have been shown to be effective in many code intelligence tasks, such as automatic code summarization and defect prediction. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, because the inputs to pre-training and to downstream tasks take different forms, it is hard to fully exploit the knowledge of pre-trained models. Moreover, the performance of fine-tuning strongly depends on the amount of downstream task data, and in practice data scarcity is common. Recent studies in natural language processing (NLP) show that prompt tuning, a new tuning paradigm, alleviates these issues and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this article, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We apply prompt tuning to the popular pre-trained models CodeBERT and CodeT5 and experiment with four code intelligence tasks: defect prediction, code search, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning on all four tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, prompt tuning can be adopted for code intelligence tasks to achieve better performance, especially when task-specific data are scarce. We also discuss the implications of adapting prompt tuning to code intelligence tasks.
Pages: 4869-4885
Page count: 17
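
To make the paradigm described in the abstract concrete, the following is a minimal sketch of cloze-style (hard) prompt tuning for the defect prediction task with CodeBERT, written against the Hugging Face transformers and PyTorch libraries. The checkpoint name (microsoft/codebert-base-mlm), the prompt template ("The code is <mask>."), and the verbalizer words ("clean" / "defective") are illustrative assumptions for this sketch, not the exact configuration evaluated in the paper.

import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base-mlm")
model = RobertaForMaskedLM.from_pretrained("microsoft/codebert-base-mlm")

# Verbalizer: map each class label to the first sub-token of a label word.
# The leading space matters for Roberta-style BPE vocabularies.
verbalizer = {0: " clean", 1: " defective"}
label_token_ids = {
    label: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]
    for label, word in verbalizer.items()
}

def class_logits(code: str) -> torch.Tensor:
    """Score each class by the masked-LM logit of its verbalizer token."""
    # Hard prompt template wrapped around the input code snippet.
    text = f"{code} The code is {tokenizer.mask_token} ."
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
    vocab_logits = model(**inputs).logits[0, mask_pos]  # logits over the vocabulary
    return torch.stack([vocab_logits[label_token_ids[l]] for l in sorted(verbalizer)])

# Prompt tuning keeps this cloze formulation and optimizes cross-entropy over the
# verbalizer logits (updating the model and, for soft prompts, extra trainable
# prompt embeddings) instead of attaching a separate classification head.
example = "int div(int a, int b) { return a / b; }"
loss = torch.nn.functional.cross_entropy(
    class_logits(example).unsqueeze(0), torch.tensor([1])  # label 1 = defective
)
loss.backward()  # gradients reach the masked-LM head, as in prompt tuning

In soft (continuous) prompt tuning, the natural-language template above would be replaced or augmented by trainable prompt embeddings prepended to the input, while prediction over the masked position via the verbalizer stays the same.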