Large Language Models in Computer Science Education: A Systematic Literature Review

Cited by: 0
Authors
Raihan, Nishat [1 ]
Siddiq, Mohammed Latif [2 ]
Santos, Joanna C. S. [2 ]
Zampieri, Marcos [1 ]
Affiliations
[1] George Mason Univ, Fairfax, VA 22030 USA
[2] Univ Notre Dame, Notre Dame, IN USA
Keywords
Large Language Models; Code Generation; CS Education;
DOI
Not available
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline classification codes
081203; 0835
Abstract
Large language models (LLMs) are becoming increasingly adept at a wide range of Natural Language Processing (NLP) tasks, such as text generation and understanding. Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL). Foundational models such as the Generative Pre-trained Transformer (GPT) and LLaMA series have set strong baseline performances in various NL and PL tasks. Additionally, several models have been fine-tuned specifically for code generation, showing significant improvements in code-related applications. Both foundational and fine-tuned models are increasingly used in education, helping students write, debug, and understand code. We present a comprehensive systematic literature review examining the impact of LLMs in computer science and computer engineering education. We analyze their effectiveness in enhancing the learning experience, supporting personalized education, and aiding educators in curriculum development. We address five research questions to uncover insights into how LLMs contribute to educational outcomes, identify challenges, and suggest directions for future research.
Pages: 938-944
Page count: 7
Related papers
50 records in total
  • [41] Kumar, Nischal Ashok; Lan, Andrew S. Using Large Language Models for Student-Code Guided Test Case Generation in Computer Science Education. AI for Education Workshop, 2024, 257: 170-178.
  • [42] Ullah, M.; Amin, S. U.; Munsif, M.; Safaev, U.; Khan, H.; Khan, S.; Ullah, H. Serious Games in Science Education. A Systematic Literature Review. Virtual Reality and Intelligent Hardware, 2022, 4(03): 189-209.
  • [43] Ly, Le Quan; Kearney, Matthew. Mobile learning in university science education: a systematic literature review. Irish Educational Studies, 2024, 43(04): 1287-1305.
  • [44] Vojir, Karel; Rusek, Martin. Science education textbook research trends: a systematic literature review. International Journal of Science Education, 2019, 41(11): 1496-1516.
  • [45] Kowalewski, Karl-Friedrich; Rodler, Severin. Large language models in science. Urologie, 2024, 63(09): 860-866.
  • [46] Cheng, Qingwan; Tao, Angela; Chen, Huangliang; Samary, Maira Marques. Design an Assessment for an Introductory Computer Science Course: A Systematic Literature Review. 2022 IEEE Frontiers in Education Conference (FIE), 2022.
  • [47] Chen, Zhe; Wang, Hui; Li, Chengxian; Liu, Chunxiang; Yang, Fengwen; Zhang, Dong; Fauci, Alice Josephine; Zhang, Junhua. Large language models in traditional Chinese medicine: a systematic review. Acupuncture and Herbal Medicine, 2025, 5(01): 57-67.
  • [48] Artsi, Yaara; Sorin, Vera; Konen, Eli; Glicksberg, Benjamin S.; Nadkarni, Girish; Klang, Eyal. Large language models for generating medical examinations: systematic review. BMC Medical Education, 2024, 24(01).
  • [49] Zarfati, Mor; Nadkarni, Girish N.; Glicksberg, Benjamin S.; Harats, Moti; Greenberger, Shoshana; Klang, Eyal; Soffer, Shelly. Exploring the Role of Large Language Models in Melanoma: A Systematic Review. Journal of Clinical Medicine, 2024, 13(23).
  • [50] Zhang, Cheng; Liu, Shanshan; Zhou, Xingyu; Zhou, Siyu; Tian, Yinglun; Wang, Shenglin; Xu, Nanfang; Li, Weishi. Examining the Role of Large Language Models in Orthopedics: Systematic Review. Journal of Medical Internet Research, 2024, 26.