CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code

Cited by: 0
Authors
Zhou, Shuyan [1 ]
Alon, Uri [1 ,2 ]
Agarwal, Sumit [1 ]
Neubig, Graham [1 ]
Affiliations
[1] Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213 USA
[2] Google DeepMind, London, England
Keywords: none listed
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Since the rise of neural natural-language-to-code models (NL → Code) that can generate long expressions and statements rather than a single next token, one of the major problems has been reliably evaluating their generated output. In this paper, we propose CodeBERTScore: an evaluation metric for code generation, which builds on BERTScore (Zhang et al., 2020). Instead of encoding only the generated tokens as in BERTScore, CodeBERTScore also encodes the natural language input preceding the generated code, thus modeling the consistency between the generated code and its given natural language context as well. We perform an extensive evaluation of CodeBERTScore across four programming languages. We find that CodeBERTScore achieves a higher correlation with human preference and with functional correctness than all existing metrics. That is, generated code that receives a higher score from CodeBERTScore is more likely to be preferred by humans, as well as to function correctly when executed. We release five language-specific pretrained models to use with our publicly available code. Our language-specific models have been downloaded more than 1,000,000 times from the Huggingface Hub.
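
The abstract describes a BERTScore-style metric: encode candidate and reference code with a pretrained code model, compute pairwise cosine similarities between their contextual token embeddings, and take greedy soft precision and recall. Below is a minimal sketch of that matching step, assuming the Hugging Face transformers library and the paper's released Python encoder (assumed to be published as neulab/codebert-python on the Hub). Unlike the full metric, this sketch does not prepend and mask the natural-language context, so it illustrates the scoring mechanics rather than reproducing the official implementation.

    # Sketch of BERTScore-style greedy matching with a pretrained code encoder.
    # Assumptions: torch and transformers are installed; the model id below is
    # the Python encoder released with the paper. The real CodeBERTScore also
    # encodes the NL context and excludes it from matching (omitted here).
    import torch
    from transformers import AutoModel, AutoTokenizer

    MODEL_ID = "neulab/codebert-python"  # assumed Hub id of the released model
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)

    def embed(code: str) -> torch.Tensor:
        """Return L2-normalized contextual token embeddings, shape (seq_len, hidden)."""
        inputs = tokenizer(code, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]
        return torch.nn.functional.normalize(hidden, dim=-1)

    def soft_f1(candidate: str, reference: str) -> float:
        """Greedy soft precision/recall over the cosine-similarity matrix."""
        sim = embed(candidate) @ embed(reference).T   # (cand_len, ref_len)
        precision = sim.max(dim=1).values.mean()      # best reference match per candidate token
        recall = sim.max(dim=0).values.mean()         # best candidate match per reference token
        return (2 * precision * recall / (precision + recall)).item()

    print(soft_f1("def add(a, b): return a + b",
                  "def add(x, y):\n    return x + y"))

Because matching is done in embedding space rather than on surface tokens, a candidate that paraphrases the reference (different identifier names, equivalent structure) can still score highly; this is the property that lets such a metric correlate with human preference better than exact-match metrics.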
Pages: 13921-13937
Page count: 17