Exploring and Evaluating Personalized Models for Code Generation

Cited: 0
Authors
Zlotchevski, Andrei [1 ]
Drain, Dawn [2 ]
Svyatkovskiy, Alexey [3 ]
Clement, Colin [3 ]
Sundaresan, Neel [3 ]
Tufano, Michele [3 ]
Affiliations
[1] McGill University, Montreal, QC, Canada
[2] Anthropic, San Francisco, CA, United States
[3] Microsoft, Redmond, WA, United States
Source
arXiv | 2022
Keywords
Learning systems; Modeling languages; Natural language processing systems; Scattering parameters; Software testing
DOI
Not available
Related Papers
50 records in total
  • [1] Exploring and Evaluating Personalized Models for Code Generation
    Zlotchevski, Andrei
    Drain, Dawn
    Svyatkovskiy, Alexey
    Clement, Colin B.
    Sundaresan, Neel
    Tufano, Michele
    PROCEEDINGS OF THE 30TH ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2022, 2022: 1500-1508
  • [2] CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code
    Zhou, Shuyan
    Alon, Uri
    Agarwal, Sumit
    Neubig, Graham
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023: 13921-13937
  • [3] Evaluating Social Bias in Code Generation Models
    Ling, Lin
    COMPANION PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, FSE COMPANION 2024, 2024: 695-697
  • [4] Exploring Continual Learning for Code Generation Models
    Yadav, Prateek
    Sun, Qing
    Ding, Hantian
    Li, Xiaopeng
    Zhang, Dejiao
    Tan, Ming
    Ma, Xiaofei
    Bhatia, Parminder
    Nallapati, Ramesh
    Ramanathan, Murali Krishna
    Bansal, Mohit
    Xiang, Bing
    61ST CONFERENCE OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023: 782-792
  • [5] Framework for evaluating code generation ability of large language models
    Yeo, Sangyeop
    Ma, Yu-Seung
    Kim, Sang Cheol
    Jun, Hyungkook
    Kim, Taeho
    ETRI JOURNAL, 2024, 46(01): 106-117
  • [6] Quantifying Contamination in Evaluating Code Generation Capabilities of Language Models
    Riddell, Martin
    Ni, Ansong
    Cohan, Arman
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024: 14116-14137
  • [7] Invited Paper: VerilogEval: Evaluating Large Language Models for Verilog Code Generation
    Liu, Mingjie
    Pinckney, Nathaniel
    Khailany, Brucek
    Ren, Haoxing
    2023 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN, ICCAD, 2023
  • [8] Exploring Personalized Neural Conversational Models
    Kottur, Satwik
    Wang, Xiaoyu
    Carvalho, Vitor
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017: 3728-3734
  • [9] CodeScore: Evaluating Code Generation by Learning Code Execution
    Dong, Yihong
    Ding, Jiazheng
    Jiang, Xue
    Li, Ge
    Li, Zhuo
    Jin, Zhi
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, 34 (03)
  • [10] JavaBench: A Benchmark of Object-Oriented Code Generation for Evaluating Large Language Models
    Cao, Jialun
    Chen, Zhiyong
    Wu, Jiarong
    Cheung, Shing-Chi
    Xu, Chang
    Proceedings - 2024 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024: 870-882