Impact of Code Language Models on Automated Program Repair

Times Cited: 38
Authors
Jiang, Nan [1 ]
Liu, Kevin [2 ]
Lutellier, Thibaud [3 ]
Tan, Lin [1 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
[2] Lynbrook High Sch, San Jose, CA USA
[3] Univ Alberta, Edmonton, AB, Canada
Source
2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ICSE | 2023
Keywords
Automated Program Repair; Code Language Model; Fine-Tuning; Deep Learning;
DOI
10.1109/ICSE48619.2023.00125
CLC Number
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Automated program repair (APR) aims to help developers improve software reliability by generating patches for buggy programs. Although many code language models (CLMs) have been developed and are effective for software tasks such as code completion, there has been little comprehensive, in-depth work to evaluate CLMs' fixing capabilities and to fine-tune CLMs for the APR task. First, this work is the first to evaluate ten CLMs on four APR benchmarks, showing that, surprisingly, the best CLM, as is, fixes 72% more bugs than the state-of-the-art deep-learning (DL)-based APR techniques. Second, we created one of the four APR benchmarks in this paper to avoid data leaking and enable a fair evaluation. Third, this is the first work to fine-tune CLMs with APR training data, showing that fine-tuning brings a 31%-1,267% improvement to CLMs and enables them to fix 46%-164% more bugs than existing DL-based APR techniques. Fourth, this work studies the impact of buggy lines, showing that CLMs, as is, cannot make good use of buggy lines to fix bugs, yet fine-tuned CLMs could potentially over-rely on them. Lastly, this work analyzes the size, time, and memory efficiency of different CLMs. This work points to promising directions for the APR domain, such as fine-tuning CLMs with APR-specific designs, raises awareness of fair and comprehensive evaluations of CLMs, and calls for more transparent reporting of the open-source repositories used in pre-training data to address the data-leaking problem.
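The fine-tuning setting described in the abstract can be pictured with a minimal sketch: pairs of buggy and fixed code serve as input/target sequences for a sequence-to-sequence code language model. The snippet below is an illustrative assumption, not the authors' pipeline; the model name (Salesforce/codet5-small), hyperparameters, and toy training pairs are placeholders chosen for brevity.

```python
# Minimal sketch (not the authors' pipeline): fine-tune a seq2seq code LM on
# buggy -> fixed pairs, then generate a candidate patch for an unseen bug.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Trainer, TrainingArguments

MODEL_NAME = "Salesforce/codet5-small"  # hypothetical choice; the paper studies ten CLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Toy APR training pairs (placeholders, not real benchmark data).
pairs = [
    ("if (i <= len) return a[i];", "if (i < len) return a[i];"),
    ("return x - y;", "return x + y;"),
]

class APRDataset(torch.utils.data.Dataset):
    """Encodes buggy code as the input and fixed code as the label sequence."""
    def __init__(self, pairs):
        self.examples = []
        for buggy, fixed in pairs:
            enc = tokenizer(buggy, truncation=True, max_length=256,
                            padding="max_length", return_tensors="pt")
            labels = tokenizer(fixed, truncation=True, max_length=256,
                               padding="max_length", return_tensors="pt").input_ids
            labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
            self.examples.append({
                "input_ids": enc.input_ids.squeeze(0),
                "attention_mask": enc.attention_mask.squeeze(0),
                "labels": labels.squeeze(0),
            })
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, idx):
        return self.examples[idx]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="apr-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=APRDataset(pairs),
)
trainer.train()

# Generate a candidate patch for an unseen buggy snippet with beam search.
buggy = "for (int i = 0; i <= n; i++) sum += a[i];"
ids = tokenizer(buggy, return_tensors="pt").input_ids
patch = model.generate(ids, max_length=64, num_beams=5)
print(tokenizer.decode(patch[0], skip_special_tokens=True))
```

In practice, patch generation is paired with patch validation (e.g., running the project's test suite on each candidate), which this sketch omits.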
Pages: 1430-1442
Number of Pages: 13