共 50 条
- [22] ComplexCodeEval: A Benchmark for Evaluating Large Code Models on More Complex Code Proceedings - 2024 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024, : 1895 - 1906
- [24] Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models EXTENDED ABSTRACTS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2022, 2022,
- [25] Evaluating and optimising compiler code generation for NVIDIA Grace 53RD INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2024, 2024, : 691 - 700
- [26] Evaluating Code Comment Generation with Summarized API Docs Proceedings - 2023 IEEE/ACM 2nd International Workshop on Natural Language-Based Software Engineering, NLBSE 2023, 2023, : 60 - 63
- [27] Evaluating Code Comment Generation With Summarized API Docs 2023 IEEE/ACM 2ND INTERNATIONAL WORKSHOP ON NATURAL LANGUAGE-BASED SOFTWARE ENGINEERING, NLBSE, 2023, : 60 - 63
- [28] Evaluating Rewards for Question Generation Models 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 2278 - 2283
- [29] SystemC code generation from UML models SYSTEM SPECIFICATION AND DESIGN LANGUAGES: BEST OF FDL '02, 2003, : 161 - 171
- [30] ReCode: Robustness Evaluation of Code Generation Models PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 13818 - 13843