27 entries in total
[1]
Ahmed T, 2024, Arxiv, DOI arXiv:2304.06815
[2]
Few-shot training LLMs for project-specific code-summarization [J]. PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022, 2022.
[3]
Bareiss P, 2022, Arxiv, DOI arXiv:2206.01335
[4]
Berabi B, 2021, PR MACH LEARN RES, V139
[5]
Brown TB, 2020, ADV NEUR IN, V33
[6]
NatGen: Generative Pre-training by "Naturalizing" Source Code [J]. PROCEEDINGS OF THE 30TH ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2022, 2022: 18-30.
[7]
On Multi-Modal Learning of Editing Source Code [J]. 2021 36TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2021, 2021: 443-455.
[8]
Chen M, 2021, Evaluating large language models trained on code
[10]
Chowdhery A, 2022, Arxiv, DOI arXiv:2204.02311