CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code

Cited by: 0
Authors
Zhou, Shuyan [1 ]
Alon, Uri [1 ,2 ]
Agarwal, Sumit [1 ]
Neubig, Graham [1 ]
Affiliations
[1] Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213 USA
[2] Google DeepMind, London, England
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Since the rise of neural natural-language-to-code models (NL → Code) that can generate long expressions and statements rather than a single next token, one of the major problems has been reliably evaluating their generated output. In this paper, we propose CodeBERTScore: an evaluation metric for code generation, which builds on BERTScore (Zhang et al., 2020). Instead of encoding only the generated tokens as in BERTScore, CodeBERTScore also encodes the natural language input preceding the generated code, thus modeling the consistency between the generated code and its given natural language context as well. We perform an extensive evaluation of CodeBERTScore across four programming languages. We find that CodeBERTScore achieves a higher correlation with human preference and with functional correctness than all existing metrics. That is, generated code that receives a higher score by CodeBERTScore is more likely to be preferred by humans, as well as to function correctly when executed. We release five language-specific pretrained models to use with our publicly available code. Our language-specific models have been downloaded more than 1,000,000 times from the Huggingface Hub.
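To make the abstract's description concrete, below is a minimal sketch of the CodeBERTScore idea: the natural-language context and the code are encoded together by a pretrained code model, but only the code-token embeddings enter the greedy cosine-similarity matching that yields precision, recall, and F1, as in BERTScore. This is not the authors' released implementation (they publish their own code and the five language-specific models mentioned above); the checkpoint name neulab/codebert-python, the helper names, and the omission of details such as token weighting and encoder-layer selection are simplifying assumptions.

```python
# Illustrative sketch only, NOT the official CodeBERTScore implementation.
# The checkpoint name below is an assumption; any CodeBERT-style encoder
# from the Hugging Face Hub could be substituted.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "neulab/codebert-python"  # assumed language-specific checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()


def encode_code_with_context(nl_context: str, code: str) -> torch.Tensor:
    """Encode NL context + code jointly; return only the code-token embeddings."""
    ctx_ids = tokenizer(nl_context, add_special_tokens=False)["input_ids"]
    code_ids = tokenizer(code, add_special_tokens=False)["input_ids"]
    input_ids = torch.tensor(
        [[tokenizer.cls_token_id] + ctx_ids + code_ids + [tokenizer.sep_token_id]]
    )
    with torch.no_grad():
        hidden = model(input_ids).last_hidden_state[0]  # (seq_len, hidden_dim)
    # Drop the leading special token and the context tokens:
    # only the code tokens take part in the similarity matching.
    start = 1 + len(ctx_ids)
    code_vecs = hidden[start:start + len(code_ids)]
    return torch.nn.functional.normalize(code_vecs, dim=-1)


def codebertscore(nl_context: str, reference: str, candidate: str):
    ref = encode_code_with_context(nl_context, reference)
    cand = encode_code_with_context(nl_context, candidate)
    sim = cand @ ref.T  # pairwise cosine similarities (candidate x reference)
    precision = sim.max(dim=1).values.mean().item()  # best match per candidate token
    recall = sim.max(dim=0).values.mean().item()     # best match per reference token
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


print(codebertscore("sort a list in descending order",
                    "sorted(xs, reverse=True)",
                    "list(reversed(sorted(xs)))"))
```

A candidate that is semantically close to the reference, even with different surface tokens, receives high token-level similarities and therefore a high F1, which is the property the paper correlates with human preference and functional correctness.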
Pages: 13921-13937
Page count: 17
Related papers
50 items in total
  • [21] Evaluating Code Comment Generation With Summarized API Docs
    Matmti, Bilel
    Fard, Fatemeh
    2023 IEEE/ACM 2ND INTERNATIONAL WORKSHOP ON NATURAL LANGUAGE-BASED SOFTWARE ENGINEERING, NLBSE, 2023, : 60 - 63
  • [22] Evaluating and optimising compiler code generation for NVIDIA Grace
    Jesus, Ricardo
    Weiland, Michele
    53RD INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2024, 2024, : 691 - 700
  • [23] SystemC code generation from UML models
    Baresi, L
    Bruschi, F
    Di Nitto, E
    Sciuto, D
    SYSTEM SPECIFICATION AND DESIGN LANGUAGES: BEST OF FDL '02, 2003, : 161 - 171
  • [24] ReCode: Robustness Evaluation of Code Generation Models
    Wang, Shiqi
    Li, Zheng
    Qian, Haifeng
    Yang, Chenghao
    Wang, Zijian
    Shang, Mingyue
    Kumar, Varun
    Tan, Samson
    Ray, Baishakhi
    Bhatia, Parminder
    Nallapati, Ramesh
    Ramanathan, Murali Krishna
    Roth, Dan
    Xiang, Bing
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 13818 - 13843
  • [25] Exploring Continual Learning for Code Generation Models
    Yadav, Prateek
    Sun, Qing
    Ding, Hantian
    Li, Xiaopeng
    Zhang, Dejiao
    Tan, Ming
    Ma, Xiaofei
    Bhatia, Parminder
    Nallapati, Ramesh
    Ramanathan, Murali Krishna
    Bansal, Mohit
    Xiang, Bing
    61ST CONFERENCE OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023, : 782 - 792
  • [26] Consistent code generation from UML models
    Long, Q
    Liu, ZM
    Li, XS
    He, JF
    2005 Australian Software Engineering Conference, Proceedings, 2005, : 23 - 30
  • [27] Efficient code generation from SHIM models
    Edwards, Stephen A.
    Tardieu, Olivier
    ACM SIGPLAN NOTICES, 2006, 41 (07) : 125 - 134
  • [28] Comparing the Pretrained Models of Source Code by Re-pretraining Under a Unified Setup
    Niu, Changan
    Li, Chuanyi
    Ng, Vincent
    Luo, Bin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 35 (12) : 1 - 11
  • [29] CodeT5+: Open Code Large Language Models for Code Understanding and Generation
    Wang, Yue
    Le, Hung
    Gotmare, Akhilesh Deepak
    Bui, Nghi D. Q.
    Li, Junnan
    Hoi, Steven C. H.
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 1069 - 1088
  • [30] GREEN-CODE: Optimizing Energy Efficiency in Large Language Models for Code Generation
    Ilager, Shashikant
    Briem, Lukas Florian
    Brandic, Ivona
    arXiv preprint