Leveraging Large Language Model for Automatic Patch Correctness Assessment

Cited by: 0
Authors
Zhou, Xin [1 ]
Xu, Bowen [2 ]
Kim, Kisub [1 ]
Han, DongGyun [3 ]
Nguyen, Hung Huu [1 ]
Le-Cong, Thanh [4 ]
He, Junda [1 ]
Le, Bach [4 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore 188065, Singapore
[2] North Carolina State Univ, Dept Comp Sci Coll Engn, Raleigh, NC 27606 USA
[3] Royal Holloway Univ London, Dept Comp Sci, Egham TW20 0EX, England
[4] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic 3010, Australia
Funding
National Research Foundation, Singapore
Keywords
Computer bugs; Codes; Task analysis; Large language models; Feature extraction; Training; Manuals; Automatic patch correctness assessment; large language models of code; in-context learning;
DOI
10.1109/TSE.2024.3452252
CLC number
TP31 [Computer Software]
Subject classification codes
081202; 0835
Abstract
Automated Program Repair (APR) techniques have shown increasingly promising results in fixing real-world bugs. Despite this effectiveness, APR techniques still face an overfitting problem: a generated patch can be incorrect even though it passes all tests. Manually evaluating the correctness of generated patches that pass all available test cases is time-consuming. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. These approaches are mainly evaluated in a cross-validation setting. However, for patches generated by a new or unseen APR tool, users are implicitly required to manually label a significant portion of these patches (e.g., 90% in 10-fold cross-validation) before inferring the correctness of the remaining patches (e.g., the other 10%). To mitigate this issue, we propose LLM4PatchCorrect, a patch correctness assessment technique that adopts a large language model for code. Specifically, for patches generated by a new or unseen APR tool, LLM4PatchCorrect needs no labeled patches from that tool for training; it directly queries the large language model for code to predict correctness labels. In this way, LLM4PatchCorrect reduces the manual labeling effort required to build a model that automatically assesses the correctness of patches generated by new APR tools. To provide the large language model for code with knowledge of the automatic patch correctness assessment (APCA) task, LLM4PatchCorrect leverages bug descriptions, execution traces, failing test cases, test coverage, and labeled patches generated by existing APR tools before deciding the correctness of the unlabeled patches of a new or unseen APR tool.
Additionally, LLM4PatchCorrect prioritizes labeled patches from existing APR tools that are semantically similar to those generated by the new APR tool, improving its accuracy on patches from new tools. Our experimental results show that LLM4PatchCorrect achieves an average accuracy of 84.4% and an F1-score of 86.5% even though no labeled patch of the new or unseen APR tool is available. In addition, our proposed technique significantly outperforms the prior state of the art.
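The abstract describes retrieving labeled patches from existing APR tools that are semantically similar to the patch under assessment, then using them as in-context demonstrations when querying the language model. A minimal sketch of that retrieve-then-prompt step, assuming patch embeddings have already been computed; the function names and prompt layout here are illustrative assumptions, not the paper's actual implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_demonstrations(query_vec, labeled_patches, k=2):
    """Rank labeled patches from existing APR tools by semantic
    similarity to the new tool's patch and keep the top k."""
    ranked = sorted(labeled_patches,
                    key=lambda p: cosine(query_vec, p["vec"]),
                    reverse=True)
    return ranked[:k]

def build_prompt(bug_description, failing_test, demos, new_patch):
    """Assemble an in-context-learning prompt: task context,
    labeled demonstrations, then the unlabeled patch to judge."""
    parts = [f"Bug: {bug_description}", f"Failing test: {failing_test}"]
    for d in demos:
        parts.append(f"Patch: {d['patch']}\n"
                     f"Correct: {'yes' if d['label'] else 'no'}")
    parts.append(f"Patch: {new_patch}\nCorrect:")
    return "\n\n".join(parts)
```

The model's completion after the final "Correct:" would then be mapped to a correct/incorrect label; LLM4PatchCorrect additionally feeds in execution traces and test coverage, omitted here for brevity.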
Pages: 2865-2883 (19 pages)
Related papers (50 total)
  • [21] LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis
    Dai, Shih-Chieh
    Xiong, Aiping
    Ku, Lun-Wei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9993 - 10001
  • [22] Automatic readability assessment for sentences: neural, hybrid and large language models
    Liu, Fengkai
    Jin, Tan
    Lee, John S. Y.
    LANGUAGE RESOURCES AND EVALUATION, 2025,
  • [23] Invalidator: Automated Patch Correctness Assessment Via Semantic and Syntactic Reasoning
    Le-Cong, Thanh
    Luong, Duc-Minh
    Le, Xuan Bach D.
    Lo, David
    Tran, Nhat-Hoa
    Quang-Huy, Bui
    Huynh, Quyet-Thang
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2023, 49 (06) : 3411 - 3429
  • [24] Automatic correctness checking of foreign language expressions using web search
    Liu, H.
    ICIC EXPRESS LETTERS, (03)
  • [25] A hybrid model for extractive summarization: Leveraging graph entropy to improve large language model performance
    Uckan, Taner
    AIN SHAMS ENGINEERING JOURNAL, 2025, 16 (05)
  • [26] Leveraging large language models for predictive chemistry
    Kevin Maik Jablonka
    Philippe Schwaller
    Andres Ortega-Guerrero
    Berend Smit
    Nature Machine Intelligence, 2024, 6 : 161 - 169
  • [27] Leveraging Large Language Models for Tradespace Exploration
    Apaza, Gabriel
    Selva, Daniel
    JOURNAL OF SPACECRAFT AND ROCKETS, 2024, 61 (05) : 1165 - 1183
  • [28] Leveraging Large Language Models for Sequential Recommendation
    Harte, Jesse
    Zorgdrager, Wouter
    Louridas, Panos
    Katsifodimos, Asterios
    Jannach, Dietmar
    Fragkoulis, Marios
    PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023, 2023, : 1096 - 1102
  • [29] Leveraging large language models for predictive chemistry
    Jablonka, Kevin Maik
    Schwaller, Philippe
    Ortega-Guerrero, Andres
    Smit, Berend
    NATURE MACHINE INTELLIGENCE, 2024, 6 (02) : 122 - 123
  • [30] A novel forecasting framework leveraging large language model and machine learning for methanol price
    Wang, Wenyang
    Luo, Yuping
    Ma, Mingrui
    Wang, Jinglin
    Sui, Cong
    ENERGY, 2025, 320