Automatic Unit Test Code Generation Using Large Language Models

Cited by: 0
Authors
Ocal, Akdeniz Kutay [1 ]
Keskinoz, Mehmet [1 ]
Affiliations
[1] Istanbul Tech Univ, Department of Computer Engineering, Istanbul, Turkiye
Keywords
software testing; unit test generation; large language models; automatic test generation;
DOI
10.1109/SIU61531.2024.10600772
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This study aimed to automate the generation of unit tests, a critical component of the software development process. Using pre-trained Large Language Models reduced manual effort and training costs while increasing test-generation capacity. Instead of feeding the functions to be tested from Java projects directly into the model, each project was first analyzed to extract additional information, and the data obtained from this analysis were used to build an effective prompt template. Furthermore, the sources of the errors in problematic generated tests were identified and fed back into the model, enabling it to correct those errors autonomously. The results showed that the model generated tests covering 55.58% of the functions collected from Java projects across different domains, and that re-feeding the model with the erroneous generated tests yielded a 29.3% improvement in the number of executable tests.
Pages: 4
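
The paper itself includes no code. As a reading aid, the following Java snippet is a minimal sketch of the pipeline the abstract describes: build a prompt from project-analysis context, generate a test, and feed execution errors back to the model for self-repair. The LanguageModel and TestRunner interfaces, the prompt wording, and the repair budget are hypothetical stand-ins, not the authors' implementation.

import java.util.Optional;

// Hypothetical stand-in for whatever pre-trained LLM backend is used.
interface LanguageModel {
    String complete(String prompt);
}

// Hypothetical stand-in: compiles and runs a generated test,
// returning the compiler/runtime error output if it fails.
interface TestRunner {
    Optional<String> tryExecute(String testSource);
}

public class UnitTestGenerator {
    private final LanguageModel model;
    private final TestRunner runner;
    private final int maxRepairRounds;

    public UnitTestGenerator(LanguageModel model, TestRunner runner, int maxRepairRounds) {
        this.model = model;
        this.runner = runner;
        this.maxRepairRounds = maxRepairRounds;
    }

    // Combines the function under test with context extracted from project
    // analysis (e.g. enclosing class, signatures, imports), mirroring the
    // abstract's prompt-template idea. The wording is an assumption.
    private String buildPrompt(String functionSource, String projectContext) {
        return "You are given a Java method and context from its project.\n"
             + "Context:\n" + projectContext + "\n"
             + "Method:\n" + functionSource + "\n"
             + "Write a JUnit test for this method.";
    }

    // Generates a test, then re-prompts the model with any error the test
    // produced so it can repair its own output, as the abstract describes.
    public Optional<String> generate(String functionSource, String projectContext) {
        String test = model.complete(buildPrompt(functionSource, projectContext));
        for (int round = 0; round < maxRepairRounds; round++) {
            Optional<String> error = runner.tryExecute(test);
            if (error.isEmpty()) {
                return Optional.of(test); // executable test produced
            }
            test = model.complete("The following JUnit test fails with this error:\n"
                    + error.get() + "\nTest:\n" + test + "\nReturn a corrected test.");
        }
        return Optional.empty(); // could not repair within the budget
    }
}

Under this reading, the reported 29.3% gain in executable tests corresponds to tests salvaged inside the repair loop rather than discarded after the first failing attempt.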