Exploring and Evaluating Personalized Models for Code Generation

Cited by: 0
Authors
Zlotchevski, Andrei [1]
Drain, Dawn [2]
Svyatkovskiy, Alexey [3]
Clement, Colin [3]
Sundaresan, Neel [3]
Tufano, Michele [3]
Affiliations
[1] McGill University, Montreal, QC, Canada
[2] Anthropic, San Francisco, CA, United States
[3] Microsoft, Redmond, WA, United States
Source
arXiv, 2022
DOI
Not available
Abstract
Learning systems - Modeling languages - Natural language processing systems - Scattering parameters - Software testing
Related Papers
50 results in total
  • [21] ComplexCodeEval: A Benchmark for Evaluating Large Code Models on More Complex Code
    Feng, Jia
    Liu, Jiachen
    Gao, Cuiyun
    Chong, Chun Yong
    Wang, Chaozheng
    Gao, Shan
    Xia, Xin
    arXiv, 2024,
  • [22] ComplexCodeEval: A Benchmark for Evaluating Large Code Models on More Complex Code
    Feng, Jia
    Liu, Jiachen
    Gao, Cuiyun
    Chong, Chun Yong
    Wang, Chaozheng
    Gao, Shan
    Xia, Xin
Proceedings - 2024 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024: 1895 - 1906
  • [23] L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models
    Ni, Ansong
    Yin, Pengcheng
    Zhao, Yilun
    Riddell, Martin
    Feng, Troy
    Shen, Rui
    Yin, Stephen
    Liu, Ye
    Yavuz, Semih
    Xiong, Caiming
    Joty, Shafiq
    Zhou, Yingbo
    Radev, Dragomir
    Cohan, Arman
Transactions of the Association for Computational Linguistics, 2024, 12: 1311 - 1329
  • [24] Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models
    Vaithilingam, Priyan
    Zhang, Tianyi
    Glassman, Elena L.
Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, CHI 2022, 2022,
  • [25] Evaluating and optimising compiler code generation for NVIDIA Grace
    Jesus, Ricardo
    Weiland, Michele
53rd International Conference on Parallel Processing, ICPP 2024, 2024: 691 - 700
  • [26] Evaluating Code Comment Generation with Summarized API Docs
    Matmti, Bilel
    Fard, Fatemeh
Proceedings - 2023 IEEE/ACM 2nd International Workshop on Natural Language-Based Software Engineering, NLBSE 2023, 2023: 60 - 63
  • [27] Evaluating Code Comment Generation With Summarized API Docs
    Matmti, Bilel
    Fard, Fatemeh
2023 IEEE/ACM 2nd International Workshop on Natural Language-Based Software Engineering, NLBSE, 2023: 60 - 63
  • [28] Evaluating Rewards for Question Generation Models
    Hosking, Tom
    Riedel, Sebastian
2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2019), Vol. 1, 2019: 2278 - 2283
  • [29] SystemC code generation from UML models
    Baresi, L
    Bruschi, F
    Di Nitto, E
    Sciuto, D
System Specification and Design Languages: Best of FDL '02, 2003: 161 - 171
  • [30] ReCode: Robustness Evaluation of Code Generation Models
    Wang, Shiqi
    Li, Zheng
    Qian, Haifeng
    Yang, Chenghao
    Wang, Zijian
    Shang, Mingyue
    Kumar, Varun
    Tan, Samson
    Ray, Baishakhi
    Bhatia, Parminder
    Nallapati, Ramesh
    Ramanathan, Murali Krishna
    Roth, Dan
    Xiang, Bing
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol. 1, 2023: 13818 - 13843