Prompt Tuning in Code Intelligence: An Experimental Evaluation

Cited by: 6
Authors
Wang, Chaozheng [1 ]
Yang, Yuanhang [1 ]
Gao, Cuiyun [1 ]
Peng, Yun [2 ]
Zhang, Hongyu [3 ,4 ]
Lyu, Michael R. [2 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong 999077, Peoples R China
[3] Univ Newcastle, Newcastle, Australia
[4] Chongqing Univ, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Tuning; Codes; Task analysis; Training; Predictive models; Adaptation models; Source coding; Code intelligence; prompt tuning; empirical study;
DOI
10.1109/TSE.2023.3313881
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Pre-trained models have been shown to be effective in many code intelligence tasks, such as automatic code summarization and defect prediction. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, because the inputs to pre-training and downstream tasks take different forms, it is hard to fully exploit the knowledge of pre-trained models. Moreover, the performance of fine-tuning strongly depends on the amount of downstream task data, while in practice data-scarce scenarios are common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new tuning paradigm, alleviates these issues and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this article, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment with four code intelligence tasks: defect prediction, code search, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning in all four tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, we could adopt prompt tuning for code intelligence tasks to achieve better performance, especially when task-specific data are scarce. We also discuss the implications of adopting prompt tuning for code intelligence tasks.
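To make the prompt-tuning paradigm described above concrete, the sketch below shows one way soft prompt tuning of CodeT5 for code summarization could be set up. It is a minimal, hypothetical example assuming the Hugging Face `transformers` and `peft` libraries and the public `Salesforce/codet5-base` checkpoint; it is not the authors' implementation, and the prompt length, initialization text, and learning rate are illustrative choices.

```python
# Illustrative sketch of soft prompt tuning for code summarization (not the
# paper's implementation). Assumes the Hugging Face `transformers` and `peft`
# libraries and the public Salesforce/codet5-base checkpoint.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

MODEL_NAME = "Salesforce/codet5-base"  # illustrative checkpoint choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Prepend 20 trainable "virtual token" embeddings to every input; the
# pre-trained weights stay frozen and only these prompt vectors are learned.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Summarize the following code:",
    num_virtual_tokens=20,
    tokenizer_name_or_path=MODEL_NAME,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prompt embeddings are trainable

# One training step on a toy (code, summary) pair, for illustration only.
code = "def add(a, b):\n    return a + b"
summary = "Return the sum of two numbers."
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=256)
labels = tokenizer(summary, return_tensors="pt", truncation=True,
                   max_length=64).input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-2)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```

In this parameter-efficient variant only the inserted prompt embeddings are updated, which is one reason prompt tuning is attractive when task-specific data are scarce; setups that tune the backbone together with the prompts are also possible.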
Pages: 4869-4885
Number of pages: 17