On the Evaluation of Large Language Models in Unit Test Generation

Cited by: 3
Authors
Yang, Lin [1 ]
Yang, Chen [1 ]
Gao, Shutao [2 ]
Wang, Weijing [1 ]
Wang, Bo [3 ]
Zhu, Qihao [4 ]
Chu, Xiao [5 ]
Zhou, Jianyi [5 ]
Liang, Guangtai [5 ]
Wang, Qianxiang [5 ]
Chen, Junjie [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin, Peoples R China
[2] Tianjin Univ, Sch Future Technol, Tianjin, Peoples R China
[3] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing, Peoples R China
[4] Peking Univ, Key Lab HCST, MoE DCST, Beijing, Peoples R China
[5] Huawei Cloud Comp Co Ltd, Hangzhou, Peoples R China
Source
PROCEEDINGS OF 2024 39TH ACM/IEEE INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2024 | 2024
Funding
National Natural Science Foundation of China;
Keywords
Large Language Model; Unit Test Generation; Empirical Study;
DOI
10.1145/3691620.3695529
CLC classification
TP [Automation Technology, Computer Technology];
Discipline code
0812 ;
Abstract
Unit testing is an essential activity in software development for verifying the correctness of software components. However, manually writing unit tests is challenging and time-consuming. The emergence of Large Language Models (LLMs) offers a new direction for automating unit test generation. Existing research primarily focuses on closed-source LLMs (e.g., ChatGPT and Codex) with fixed prompting strategies, leaving the capabilities of advanced open-source LLMs with various prompting settings unexplored. In particular, open-source LLMs offer advantages in data privacy protection and have demonstrated superior performance in some tasks. Moreover, effective prompting is crucial for maximizing LLMs' capabilities. In this paper, we conduct the first empirical study to fill this gap, based on 17 Java projects, five widely-used open-source LLMs with different structures and parameter sizes, and comprehensive evaluation metrics. Our findings highlight the significant influence of various prompt factors, show the performance of open-source LLMs compared to the commercial GPT-4 and the traditional EvoSuite, and identify limitations in LLM-based unit test generation. We then derive a series of implications from our study to guide future research and practical use of LLM-based unit test generation.
Pages: 1607-1619
Number of pages: 13
Cited references
93 entries in total
[21]  
Hoffmann Marc, 2021, JaCoCo: Java Code Coverage Library
[22]  
Holtzman Ari, 2020, INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS (ICLR 2020)
[23]  
Hou Xinyi, 2023, arXiv preprint, abs/2308.10620
[24]   Variable-based Fault Localization via Enhanced Decision Tree [J].
Jiang, Jiajun ;
Wang, Yumeng ;
Chen, Junjie ;
Lv, Delin ;
Liu, Mengjiao .
ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (02)
[25]   Combining Spectrum-Based Fault Localization and Statistical Debugging: An Empirical Study [J].
Jiang, Jiajun ;
Wang, Ran ;
Xiong, Yingfei ;
Chen, Xiangping ;
Zhang, Lu .
34TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2019), 2019, :502-514
[26]   A manual inspection of Defects4J bugs and its implications for automatic program repair [J].
Jiang, Jiajun ;
Xiong, Yingfei ;
Xia, Xin .
SCIENCE CHINA-INFORMATION SCIENCES, 2019, 62 (10)
[27]   Shaping Program Repair Space with Existing Patches and Similar Code [J].
Jiang, Jiajun ;
Xiong, Yingfei ;
Zhang, Hongyu ;
Gao, Qing ;
Chen, Xiangqun .
ISSTA'18: PROCEEDINGS OF THE 27TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, 2018, :298-309
[28]   Evaluating Fault Localization and Program Repair Capabilities of Existing Closed-Source General-Purpose LLMs [J].
Jiang, Shengbei ;
Zhang, Jiabao ;
Chen, Wei ;
Wang, Bo ;
Zhou, Jianyi ;
Zhang, Jie .
2024 INTERNATIONAL WORKSHOP ON LARGE LANGUAGE MODELS FOR CODE, LLM4CODE 2024, 2024, :75-78
[29]  
Jun H., 2021, arXiv
[30]  
Just R., 2014, PROCEEDINGS OF THE 2014 INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS (ISSTA 2014), P437, DOI 10.1145/2610384.2628055