Multi-stage guided code generation for Large Language Models

Cited by: 1
Authors
Han, Yewei [1]
Lyu, Chen [1]
Affiliations
[1] Shandong Normal Univ, Sch Informat Sci & Engn, Shandong Prov Key Lab Distributed Comp Software No, Univ Rd 1, Jinan, Peoples R China
Keywords
Code generation; Multi-stage; Large Language Models; Prompt technique
DOI
10.1016/j.engappai.2024.109491
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Although Large Language Models (LLMs) have demonstrated strong performance in code generation, their effectiveness on complex programming tasks remains limited. This is primarily due to the substantial distance between the problem description and the correct code, which makes it difficult to ensure accuracy when generating code directly. Human programmers faced with a complex programming problem usually solve it in multiple stages to reduce the difficulty of development: first they analyze the problem and devise a solution plan, then they design a code architecture based on that plan, and finally they write the detailed code. Based on this observation, we propose a multi-stage guided code generation strategy that gradually shortens the transformation distance between the problem description and the correct code, thereby improving the accuracy of code generation. Specifically, the approach consists of three stages: planning, design, and implementation. In the planning stage, the Large Language Model (LLM) generates a solution plan from the problem description; in the design stage, a code architecture is designed based on the solution plan; and in the implementation stage, both the solution plan and the code architecture guide the LLM in generating the final code. Additionally, we found that existing competition-level code generation benchmarks may overlap with the training data of the Chat Generative Pre-trained Transformer (ChatGPT), posing a risk of data leakage. To validate these findings and avoid this risk, we created a competition-level code generation dataset named CodeC, which contains data never used to train ChatGPT. Experimental results show that our method outperforms the strongest baselines. On the CodeC dataset, our approach achieves a 34.7% relative improvement in the Pass@1 metric over ChatGPT's direct generation. We have published the dataset at https://github.com/hcode666/MSG for further academic research and validation.
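The staged pipeline described in the abstract can be pictured as three chained prompts, where each stage consumes the artifacts produced by the previous one. Below is a minimal Python sketch of that idea; the `llm` callable, the function name `multi_stage_generate`, and the prompt wording are illustrative assumptions, not the paper's exact prompts or implementation.

```python
from typing import Callable

def multi_stage_generate(problem: str, llm: Callable[[str], str]) -> str:
    """Three-stage guided generation: plan -> architecture -> code.

    `llm` is any function mapping a prompt string to a completion
    string (e.g., a thin wrapper around a ChatGPT API call).
    """
    # Stage 1: planning -- ask for a natural-language solution plan.
    plan = llm(
        "You are an expert programmer. Analyze the problem and write "
        "a step-by-step solution plan."
        f"\n\nProblem:\n{problem}"
    )
    # Stage 2: design -- derive a code architecture (signatures, data
    # structures, control flow) from the plan, not the raw problem alone.
    architecture = llm(
        "Based on the problem and the solution plan below, design a "
        "code architecture: function signatures, data structures, and "
        "control flow. Do not write the full code yet."
        f"\n\nProblem:\n{problem}\n\nPlan:\n{plan}"
    )
    # Stage 3: implementation -- both intermediate artifacts guide the
    # final code, shortening the problem-to-code transformation distance.
    code = llm(
        "Implement the final, complete code, strictly following the "
        "plan and the architecture below."
        f"\n\nProblem:\n{problem}\n\nPlan:\n{plan}"
        f"\n\nArchitecture:\n{architecture}"
    )
    return code

if __name__ == "__main__":
    # Stub LLM for demonstration; replace with a real API wrapper.
    echo = lambda prompt: f"<completion for a {len(prompt)}-char prompt>"
    print(multi_stage_generate("Sum the digits of an integer.", echo))
```

In the reported setup, `llm` would wrap a ChatGPT call; the key design choice is that each stage's output is fed forward as context, so the model never has to bridge the full problem-to-code gap in a single step.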
Pages: 13