Multi-stage guided code generation for Large Language Models

Cited by: 0
Authors
Han, Yewei [1]
Lyu, Chen [1]
Affiliations
[1] Shandong Normal Univ, Sch Informat Sci & Engn, Shandong Prov Key Lab Distributed Comp Software No, Univ Rd 1, Jinan, Peoples R China
Keywords
Code generation; Multi-stage; Large Language Models; Prompt technique;
DOI
10.1016/j.engappai.2024.109491
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Although Large Language Models (LLMs) have demonstrated strong performance in code generation, their effectiveness on complex programming tasks remains limited. This is primarily because of the substantial distance between a problem description and the correct code, which makes it difficult to ensure accuracy when generating code directly. When faced with a complex programming problem, human programmers usually work in multiple stages to reduce the difficulty of development: first they analyze the problem and devise a solution plan, then they design a code architecture based on that plan, and finally they write the detailed code. Based on this observation, we propose a multi-stage guided code generation strategy that gradually shortens the transformation distance between the problem description and the correct code, thereby improving the accuracy of code generation. Specifically, the approach consists of three stages: planning, design, and implementation. In the planning stage, the Large Language Model (LLM) generates a solution plan from the problem description; in the design stage, a code architecture is designed based on the solution plan; and in the implementation stage, the solution plan and the code architecture together guide the LLM in generating the final code. Additionally, we found that existing competition-level code generation benchmarks may overlap with the training data of the Chat Generative Pre-trained Transformer (ChatGPT), posing a risk of data leakage. To validate these findings and avoid this risk, we created a competition-level code generation dataset named CodeC, which contains data never used to train ChatGPT. Experimental results show that our method outperforms the strongest baselines. On the CodeC dataset, our approach achieves a 34.7% relative improvement in the Pass@1 metric over direct generation with ChatGPT. We have published the dataset at https://github.com/hcode666/MSG for further academic research and validation.
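To make the three-stage strategy concrete, the following is a minimal sketch of a planning, design, and implementation prompting pipeline. It is not the paper's implementation: the OpenAI client, the model name, and the prompt wording are illustrative assumptions, and only chained prompting calls that exist in the official `openai` Python package (version 1.x) are used.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-3.5-turbo"  # hypothetical choice; the paper evaluates ChatGPT

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def multi_stage_generate(problem: str) -> str:
    # Stage 1 (planning): derive a natural-language solution plan.
    plan = ask("Analyze the following problem and write a step-by-step "
               f"solution plan:\n{problem}")
    # Stage 2 (design): turn the plan into a code architecture
    # (signatures, data structures, docstrings) without full bodies.
    design = ask(f"Problem:\n{problem}\n\nSolution plan:\n{plan}\n\n"
                 "Design a code architecture (function signatures and "
                 "docstrings) that follows this plan.")
    # Stage 3 (implementation): generate the final code guided by both
    # the plan and the architecture.
    return ask(f"Problem:\n{problem}\n\nSolution plan:\n{plan}\n\n"
               f"Code architecture:\n{design}\n\n"
               "Complete the implementation as runnable Python code.")

Each stage conditions the model on all artifacts produced so far, which is what "gradually shortens the transformation distance" between problem and code refers to in the abstract.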
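Pass@1, the metric reported above, is the k = 1 case of the standard unbiased pass@k estimator of Chen et al. (2021): pass@k = E[1 - C(n-c, k) / C(n, k)], where n samples are drawn per problem and c of them pass all unit tests. A short sketch, with hypothetical (n, c) pairs:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k drawn samples is correct."""
    if n - c < k:  # every size-k draw must contain a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Dataset-level Pass@1 averages the per-problem estimates; for k = 1
# the formula reduces to c / n, the fraction of correct samples.
results = [(10, 3), (10, 0), (10, 7)]  # hypothetical (n, c) per problem
print(sum(pass_at_k(n, c, 1) for n, c in results) / len(results))

Note that a 34.7% relative improvement means the new Pass@1 is 1.347 times the baseline's Pass@1, not 34.7 percentage points higher.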
Pages: 13
Related Papers
50 records in total
  • [1] Increasing Accessibility of Language Models with Multi-stage Information Extraction
    Czejdo, Conrad
    Bhattacharya, Sambit
    JOURNAL OF ADVANCES IN INFORMATION TECHNOLOGY, 2022, 13 (02) : 181 - 185
  • [2] A Comparative Analysis of Large Language Models for Code Documentation Generation
    Dvivedi, Shubhang Shekhar
    Vijay, Vyshnav
    Pujari, Sai Leela Rahul
    Lodh, Shoumik
    Kumar, Dhruv
    PROCEEDINGS OF THE 1ST ACM INTERNATIONAL CONFERENCE ON AI-POWERED SOFTWARE, AIWARE 2024, 2024, : 65 - 73
  • [3] BioCoder: a benchmark for bioinformatics code generation with large language models
    Tang, Xiangru
    Qian, Bill
    Gao, Rick
    Chen, Jiakang
    Chen, Xinyun
    Gerstein, Mark B.
    BIOINFORMATICS, 2024, 40 : i266 - i276
  • [4] LegalReasoner: A Multi-Stage Framework for Legal Judgment Prediction via Large Language Models and Knowledge Integration
    Wang, Xuran
    Zhang, Xinguang
    Hoo, Vanessa
    Shao, Zhouhang
    Zhang, Xuguang
    IEEE ACCESS, 2024, 12 : 166843 - 166854
  • [5] Knowledge-Aware Code Generation with Large Language Models
    Huang, Tao
    Sun, Zhihong
    Jin, Zhi
    Li, Ge
    Lyu, Chen
    PROCEEDINGS 2024 32ND IEEE/ACM INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION, ICPC 2024, 2024, : 52 - 63
  • [6] Self-Planning Code Generation with Large Language Models
    Jiang, Xue
    Dong, Yihong
    Wang, Lecheng
    Fang, Zheng
    Shang, Qiwei
    Li, Ge
    Jin, Zhi
    Jiao, Wenpin
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (07)
  • [7] Framework for evaluating code generation ability of large language models
    Yeo, Sangyeop
    Ma, Yu-Seung
    Kim, Sang Cheol
    Jun, Hyungkook
    Kim, Taeho
    ETRI JOURNAL, 2024, 46 (01) : 106 - 117
  • [8] Large multi-stage OXC
    Lelic, I
    ECOC'01: 27TH EUROPEAN CONFERENCE ON OPTICAL COMMUNICATION, VOLS 1-6, 2001, : 540 - 541
  • [9] PassGPT: Password Modeling and (Guided) Generation with Large Language Models
    Rando, Javier
    Perez-Cruz, Fernando
    Hitaj, Briland
    COMPUTER SECURITY - ESORICS 2023, PT IV, 2024, 14347 : 164 - 183
  • [10] Using Large Language Models for Student-Code Guided Test Case Generation in Computer Science Education
    Kumar, Nischal Ashok
    Lan, Andrew S.
arXiv preprint