CAT-LM: Training Language Models on Aligned Code And Tests

Times Cited: 17
Authors
Rao, Nikitha [1 ]
Jain, Kush [1 ]
Alon, Uri [1 ]
Le Goues, Claire [1 ]
Hellendoorn, Vincent J. [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE | 2023
Funding
U.S. National Science Foundation;
Keywords
test generation; test completion; large language models; code-test alignment;
DOI
10.1109/ASE56229.2023.00193
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Testing is an integral but often neglected part of the software development process. Classical test generation tools such as EvoSuite generate behavioral test suites by optimizing for coverage, but tend to produce tests that are hard to understand. Language models trained on code can generate code that is highly similar to code written by humans, but current models are trained to generate each file separately, as is standard practice in natural language processing, and thus fail to consider the code-under-test context when producing a test file. In this work, we propose the Aligned Code And Tests Language Model (CAT-LM), a GPT-style language model with 2.7 billion parameters, trained on a corpus of Python and Java projects. We utilize a novel pretraining signal that explicitly considers the mapping between code and test files when available. We also drastically increase the maximum sequence length of inputs to 8,192 tokens, 4x the length used by typical code generation models, to ensure that the code context is available to the model when generating test code. We analyze CAT-LM's usefulness for realistic applications, showing that sampling with filtering (e.g., by compilability or coverage) allows it to efficiently produce tests that achieve coverage similar to developer-written ones while resembling developers' writing style. By utilizing the code context, CAT-LM generates more valid tests than even much larger language models trained on more data (CodeGen 16B and StarCoder), and it substantially outperforms a recent test-specific model (TeCo) at test completion. Overall, our work highlights the importance of incorporating software-specific insights when training language models for code and paves the way to more powerful automated test generation.
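To make the abstract's two key mechanisms concrete, the short Python sketch below illustrates (a) building an aligned pretraining instance by concatenating a code file with its mapped test file behind a separator token, and (b) sampling with filtering by compilability and coverage. This is a minimal sketch, not the authors' implementation: the separator token name and the compiles/coverage helpers are hypothetical stand-ins for the paper's actual tokenizer and test-execution harness.

    from typing import Callable, Iterable, List, Optional

    # Hypothetical separator token; the paper's actual special token may differ.
    CODE_TEST_SEP = "<|codetestpair|>"

    def make_training_instance(code_src: str, test_src: Optional[str]) -> str:
        # When a test file maps to the code file, train on the concatenated
        # pair so the model sees the code-under-test context; otherwise fall
        # back to the lone file, as in standard file-level pretraining.
        if test_src is None:
            return code_src
        return code_src + CODE_TEST_SEP + test_src

    def sample_and_filter(candidates: Iterable[str],
                          compiles: Callable[[str], bool],
                          coverage: Callable[[str], float]) -> List[str]:
        # Keep only sampled tests that compile, then rank the survivors by
        # the coverage they achieve; `compiles` and `coverage` stand in for
        # a real compiler invocation and coverage measurement.
        valid = [t for t in candidates if compiles(t)]
        return sorted(valid, key=coverage, reverse=True)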
Pages: 409-420
Page count: 12