Knowledge-Aware Code Generation with Large Language Models

Cited by: 4
Authors
Huang, Tao [1 ]
Sun, Zhihong [1 ]
Jin, Zhi [2 ]
Li, Ge [2 ]
Lyu, Chen [1 ]
Affiliations
[1] Shandong Normal University, School of Information Science and Engineering, Jinan, People's Republic of China
[2] Peking University, School of Computer Science, Key Laboratory of High Confidence Software Technologies (Ministry of Education), Beijing, People's Republic of China
Source
Proceedings of the 32nd IEEE/ACM International Conference on Program Comprehension (ICPC 2024) | 2024
Funding
National Natural Science Foundation of China
Keywords
Code Generation; Large Language Models; Knowledge Library
DOI
10.1145/3643916.3644418
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Large Language Models (LLMs) perform well on basic programming problems, but they struggle with complex tasks that require diverse algorithmic and data-structure skills, particularly programming-competition-level problems. Notably, ChatGPT performs proficiently on problems it encountered during pre-training, but its performance deteriorates on novel problems. Consequently, enhancing the ability of LLMs to solve unfamiliar problems has become a pivotal research focus. The problem-solving process of LLMs mirrors that of human programmers to some extent: when confronted with a new programming task, human programmers plan the task and write code using previously acquired knowledge of algorithms and data structures. Although LLMs have learned such knowledge, they struggle to apply it effectively to specific new problems. To address this issue, we constructed a novel dataset, CodeF, part of which consists of programming problems that ChatGPT has not previously encountered. Furthermore, we developed a Knowledge Library tailored for Python programming-contest problems and introduced Knowledge-Aware Code Generation (KareCoder). KareCoder strengthens the models' understanding and problem-solving capabilities by integrating prompts and knowledge from the library into the LLMs' code-generation reasoning process. When tested on the CodeF and APPS datasets, KareCoder demonstrated outstanding performance, especially on the Pass@1 metric, in handling novel problems previously unencountered by LLMs. Compared with code generated directly by ChatGPT, KareCoder achieved a relative improvement of 23.3% in Pass@1 on the CodeF post2021-9 dataset. It also performs well relative to other methods on problems that LLMs have previously encountered. Our dataset and experiment data are open-sourced and can be accessed at https://github.com/CodeGeneration3/KareCoder.
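The abstract names two technical ingredients without detailing them: knowledge-aware prompt construction and the Pass@1 metric. The Python sketch below is a minimal illustration of both, assuming a toy keyword-matched knowledge library and the standard unbiased Pass@k estimator of Chen et al. (2021); every identifier here (KNOWLEDGE_LIBRARY, retrieve_knowledge, build_prompt, pass_at_k) is hypothetical and not the paper's actual API.

from math import comb

# Hypothetical toy knowledge library: algorithm/data-structure notes keyed
# by topic, loosely in the spirit of the paper's Python contest Knowledge
# Library (the real library's structure is not given in this record).
KNOWLEDGE_LIBRARY = {
    "binary search": "Maintain lo/hi bounds and halve the search range each step.",
    "dynamic programming": "Define the subproblem state, recurrence, and base cases.",
}

def retrieve_knowledge(problem: str) -> list[str]:
    """Return library notes whose topic keyword appears in the problem text."""
    text = problem.lower()
    return [note for topic, note in KNOWLEDGE_LIBRARY.items() if topic in text]

def build_prompt(problem: str) -> str:
    """Prepend retrieved knowledge to the task before querying the LLM."""
    notes = retrieve_knowledge(problem)
    knowledge = "\n".join(f"- {n}" for n in notes) or "- (no matching entry)"
    return (
        "Relevant algorithmic knowledge:\n"
        f"{knowledge}\n\n"
        "Using the knowledge above, write a Python solution for:\n"
        f"{problem}\n"
    )

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: n samples per problem, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k draw necessarily contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

For k = 1 the estimator reduces to c / n, so a dataset-level Pass@1 score is simply the expected fraction of problems solved by a single generated program, averaged over problems.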
Pages: 52-63
Page count: 12