Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation

Cited by: 0
Authors
Liu, Jiawei [1 ]
Xia, Chunqiu Steven [1 ]
Wang, Yuyao [2 ]
Zhang, Lingming [1 ]
Affiliations
[1] University of Illinois Urbana-Champaign, Champaign, IL 61820, USA
[2] Nanjing University, Nanjing, People's Republic of China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Program synthesis has long been studied, with recent approaches focusing on directly leveraging the power of Large Language Models (LLMs) to generate code. Programming benchmarks, with curated synthesis problems and test-cases, are used to measure the performance of various LLMs on code synthesis. However, these test-cases can be limited in both quantity and quality for fully assessing the functional correctness of the generated code. Such limitation in the existing benchmarks begs the question: in the era of LLMs, is the generated code really correct? To answer this, we propose EvalPlus, a code synthesis evaluation framework that rigorously benchmarks the functional correctness of LLM-synthesized code. EvalPlus augments a given evaluation dataset with large numbers of test-cases newly produced by an automatic test input generator, powered by both LLM- and mutation-based strategies. While EvalPlus is general, we extend the test-cases of the popular HUMANEVAL benchmark by 80x to build HUMANEVAL+. Our extensive evaluation across 26 popular LLMs (e.g., GPT-4 and ChatGPT) demonstrates that HUMANEVAL+ is able to catch significant amounts of previously undetected wrong code synthesized by LLMs, reducing pass@k by up to 19.3-28.9%. Surprisingly, we also find that test insufficiency can lead to mis-ranking. For example, both WizardCoder-CodeLlama and Phind-CodeLlama outperform ChatGPT on HUMANEVAL+, while neither does on HUMANEVAL. Our work not only indicates that prior popular code synthesis evaluation results do not accurately reflect the true performance of LLMs for code synthesis, but also opens up a new direction for improving such programming benchmarks through automated testing. We have open-sourced our tools, enhanced datasets, and all LLM-generated code at https://github.com/evalplus/evalplus to facilitate and accelerate future LLM-for-code research.
Pages: 15
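
Note: The abstract refers to the pass@k metric and to a mutation-based test input generator. The Python sketch below is an illustration only, not the authors' implementation: pass_at_k is the standard unbiased pass@k estimator from the Codex paper (Chen et al., 2021), and mutate is a hypothetical, heavily simplified type-aware input mutator meant only to convey the idea of mutation-based test augmentation.

    import math
    import random

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
        where n samples were generated per problem and c of them passed all tests."""
        if n - c < k:
            return 1.0
        return 1.0 - math.comb(n - c, k) / math.comb(n, k)

    def mutate(value):
        """Toy type-aware mutation: perturb a seed input while preserving its type.
        (Hypothetical simplification; the generator described in the paper is more involved.)"""
        if isinstance(value, bool):      # check bool before int (bool is a subclass of int)
            return not value
        if isinstance(value, int):
            return value + random.choice([-1, 1])
        if isinstance(value, str):
            return value + random.choice("xyz")
        if isinstance(value, list):
            return [mutate(v) for v in value] + ([value[0]] if value else [])
        return value

    # Example: 200 samples for one problem, 67 pass the augmented tests.
    print(round(pass_at_k(n=200, c=67, k=1), 3))  # empirical pass@1 = 0.335
    print(mutate([1, 2, 3]))                      # e.g., [0, 3, 2, 1]

The estimator is averaged over all benchmark problems. Adding test-cases (as in HUMANEVAL+) can only reduce c, since more tests can only expose more failing samples, which is why augmenting the test suite lowers measured pass@k for code that only appeared correct under the original tests.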