50 entries in total
- [1] CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), pp. 13921–13937.
- [2] Exploring and Evaluating Personalized Models for Code Generation. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022), pp. 1500–1508.
- [5] Quantifying Contamination in Evaluating Code Generation Capabilities of Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers), 2024, pp. 14116–14137.
- [6] Invited Paper: VerilogEval: Evaluating Large Language Models for Verilog Code Generation. In Proceedings of the 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD 2023).
- [8] JavaBench: A Benchmark of Object-Oriented Code Generation for Evaluating Large Language Models. In Proceedings of the 39th ACM/IEEE International Conference on Automated Software Engineering (ASE 2024), pp. 870–882.
- [9] VHDL-Eval: A Framework for Evaluating Large Language Models in VHDL Code Generation. In Proceedings of the 2024 IEEE LLM Aided Design Workshop (LAD 2024).
- [10] Evaluating the Performance of Code Generation Models for Solving Parsons Problems With Small Prompt Variations. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education (ITiCSE 2023, Vol. 1), pp. 299–305.