GPT-3-Powered Type Error Debugging: Investigating the Use of Large Language Models for Code Repair

Cited by: 11
Authors
Ribeiro, Francisco [1 ]
Castro de Macedo, Jose Nuno [1 ]
Tsushima, Kanae [2 ]
Abreu, Rui [3 ]
Saraiva, Joao [1 ]
Affiliations
[1] Univ Minho, HASLab, INESC TEC, Braga, Portugal
[2] Sokendai Univ, Natl Inst Informat, Tokyo, Japan
[3] Univ Porto, INESC ID, Porto, Portugal
Keywords
Automated Program Repair; GPT-3; Fault Localization; Code Generation
DOI
10.1145/3623476.3623522
CLC Number
TP31 [Computer Software]
Subject Classification Code
081202; 0835
Abstract
Type systems are responsible for assigning types to terms in programs. In doing so, they constrain the operations that can be performed and can, consequently, detect type errors during compilation. However, while they are able to flag the existence of an error, they often fail to pinpoint its cause or provide a helpful error message. Thus, without adequate support, debugging errors of this kind can take considerable effort. Recently, neural network models have been developed that are able to understand programming languages and perform several downstream tasks. We argue that type error debugging can be enhanced by taking advantage of this deeper understanding of the language's structure. In this paper, we present a technique that leverages GPT-3's capabilities to automatically fix type errors in OCaml programs. We perform multiple source code analysis tasks to produce useful prompts that are then provided to GPT-3 to generate potential patches. Our publicly available tool, Mentat, supports multiple modes and was validated on an existing public dataset with thousands of OCaml programs. We automatically validate repairs by using QuickCheck to verify which generated patches produce the same output as the user-intended fixed version, achieving a 39% repair rate. In a comparative study, Mentat outperformed two other techniques in automatically fixing ill-typed OCaml programs.
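To illustrate the patch-validation step the abstract describes, the sketch below compares a candidate patch against the user-intended fix on randomly generated inputs. This is a minimal, hand-rolled stand-in for the QuickCheck-style testing the paper uses (the actual tool relies on a property-based testing library); the `patched` and `intended` functions are hypothetical examples, not taken from the paper.

```ocaml
(* Hypothetical candidate patch (e.g. produced by the LLM) and the
   user-intended fixed version of an ill-typed list-summing function. *)
let rec patched xs = match xs with
  | [] -> 0
  | h :: t -> h + patched t

let intended xs = List.fold_left ( + ) 0 xs

(* Generate a random list of small integers as a test input. *)
let random_list () =
  List.init (Random.int 10) (fun _ -> Random.int 100)

(* Accept the patch only if it agrees with the intended version
   on every randomly generated input. *)
let agree ~trials f g =
  let ok = ref true in
  for _ = 1 to trials do
    let xs = random_list () in
    if f xs <> g xs then ok := false
  done;
  !ok

let () =
  Random.self_init ();
  if agree ~trials:100 patched intended then
    print_endline "patch accepted"
  else
    print_endline "patch rejected"
```

Random testing of this kind can only show agreement on the sampled inputs, not full equivalence, which is why the abstract reports a repair rate rather than a proof of correctness for each patch.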
Pages: 111-124
Page count: 14