Leveraging Large Language Model for Automatic Patch Correctness Assessment

Citations: 0
Authors
Zhou, Xin [1 ]
Xu, Bowen [2 ]
Kim, Kisub [1 ]
Han, DongGyun [3 ]
Nguyen, Hung Huu [1 ]
Le-Cong, Thanh [4 ]
He, Junda [1 ]
Le, Bach [4 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore 188065, Singapore
[2] North Carolina State Univ, Dept Comp Sci Coll Engn, Raleigh, NC 27606 USA
[3] Royal Holloway Univ London, Dept Comp Sci, Egham TW20 0EX, England
[4] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic 3010, Australia
Funding
National Research Foundation, Singapore
Keywords
Computer bugs; Codes; Task analysis; Large language models; Feature extraction; Training; Manuals; Automatic patch correctness assessment; large language models of code; in-context learning
DOI
10.1109/TSE.2024.3452252
CLC Number
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Automated Program Repair (APR) techniques have shown increasingly promising results in fixing real-world bugs. Despite their effectiveness, APR techniques still face an overfitting problem: a generated patch can be incorrect even though it passes all tests. Manually evaluating the correctness of generated patches that pass all available test cases is time-consuming. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. These approaches are mainly evaluated in a cross-validation setting. However, for patches generated by a new or unseen APR tool, the cross-validation setting implicitly requires users to manually label a significant portion of those patches (e.g., 90% in 10-fold cross-validation) before inferring the remaining ones (e.g., 10% in 10-fold cross-validation). To mitigate this issue, we propose LLM4PatchCorrect, a patch correctness assessment technique built on a large language model for code. Specifically, for patches generated by a new or unseen APR tool, LLM4PatchCorrect needs no labeled patches from that tool: it directly queries the large language model for code to predict correctness labels, without any training. In this way, LLM4PatchCorrect reduces the manual labeling effort required to build a model that automatically assesses the correctness of patches generated by new APR tools. To supply the large language model for code with knowledge about the automatic patch correctness assessment (APCA) task, LLM4PatchCorrect leverages bug descriptions, execution traces, failing test cases, test coverage, and labeled patches generated by existing APR tools before deciding the correctness of the unlabeled patches of a new or unseen APR tool. Additionally, LLM4PatchCorrect prioritizes labeled patches from existing APR tools that are semantically similar to those generated by the new tool, improving its accuracy on patches from new APR tools. Our experimental results show that LLM4PatchCorrect achieves an average accuracy of 84.4% and an average F1-score of 86.5%, even though no labeled patch of the new or unseen APR tool is available. In addition, our proposed technique significantly outperforms the prior state of the art.
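
To make the workflow described in the abstract concrete, the following Python sketch illustrates one possible shape of the approach. It is not the authors' implementation: query_llm is a hypothetical stand-in for any large language model of code, Jaccard token overlap is a crude proxy for the learned semantic-similarity ranking the paper describes, and all names (LabeledPatch, assess_patch) are invented for illustration.

# Illustrative sketch, not the authors' implementation: pick the labeled
# patches from existing APR tools most similar to the new tool's patch,
# assemble them as in-context examples together with the bug description
# and failing test, and query an LLM of code for a correctness label.
from dataclasses import dataclass

@dataclass
class LabeledPatch:
    diff: str      # patch text produced by an existing APR tool
    correct: bool  # human-assigned correctness label

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of whitespace tokens (crude proxy for semantic similarity)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in any large language model of code here.
    raise NotImplementedError

def assess_patch(new_patch: str, bug_description: str, failing_test: str,
                 labeled_pool: list[LabeledPatch], k: int = 4) -> bool:
    """Predict correctness of a patch from a new/unseen APR tool.

    No labels from the new tool are used; the model is queried directly
    via in-context learning, with no training step.
    """
    # Prioritize labeled patches most similar to the new tool's patch.
    demos = sorted(labeled_pool,
                   key=lambda p: similarity(p.diff, new_patch),
                   reverse=True)[:k]
    shots = "\n\n".join(
        f"Patch:\n{p.diff}\nCorrect: {'yes' if p.correct else 'no'}"
        for p in demos)
    prompt = (f"Bug description:\n{bug_description}\n\n"
              f"Failing test:\n{failing_test}\n\n"
              f"{shots}\n\n"
              f"Patch:\n{new_patch}\nCorrect:")
    return query_llm(prompt).strip().lower().startswith("yes")

Because the demonstrations are drawn entirely from labeled patches of existing APR tools, no label from the new tool is ever required, which is the key property the abstract highlights.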
Pages: 2865-2883
Page count: 19