Large Language Models in Computer Science Education: A Systematic Literature Review

Cited by: 0
Authors
Raihan, Nishat [1 ]
Siddiq, Mohammed Latif [2 ]
Santos, Joanna C. S. [2 ]
Zampieri, Marcos [1 ]
Affiliations
[1] George Mason Univ, Fairfax, VA 22030 USA
[2] Univ Notre Dame, Notre Dame, IN USA
Keywords
Large Language Models; Code Generation; CS Education
DOI
Not available
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Large language models (LLMs) are becoming increasingly capable across a wide range of natural language processing (NLP) tasks, such as text generation and understanding. Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL). Foundational models such as the Generative Pre-trained Transformer (GPT) and LLaMA series have established strong baseline performance on a variety of NL and PL tasks. In addition, several models have been fine-tuned specifically for code generation and show significant improvements in code-related applications. Both foundational and fine-tuned models are increasingly used in education, helping students write, debug, and understand code. We present a comprehensive systematic literature review examining the impact of LLMs in computer science and computer engineering education. We analyze their effectiveness in enhancing the learning experience, supporting personalized education, and aiding educators in curriculum development. We address five research questions to uncover insights into how LLMs contribute to educational outcomes, identify challenges, and suggest directions for future research.
Pages: 938 - 944
Page count: 7
Related Papers
50 records in total
  • [21] Intersectionality in language teacher education: a systematic literature review
    Tarrayo, Veronico N.
    LANGUAGE CULTURE AND CURRICULUM, 2025,
  • [22] Broadening Participation in Adult Education: A Literature Review of Computer Science Education
    Agbo, Friday Joseph
    PROCEEDINGS OF THE 55TH ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION, SIGCSE 2024, VOL. 1, 2024, : 11 - 17
  • [23] Smartphone Usage in Science Education: A Systematic Literature Review
    Ubben, Malte S.
    Kremer, Fabienne E.
    Heinicke, Susanne
    Marohn, Annette
    Heusler, Stefan
    EDUCATION SCIENCES, 2023, 13 (04):
  • [24] Serious games in science education: a systematic literature review
    Ullah, Mohib
    Amin, Sareer Ul
    Munsif, Muhammad
    Yamin, Muhammad Mudassar
    Safaev, Utkurbek
    Khan, Habib
    Khan, Salman
    Ullah, Habib
    VIRTUAL REALITY & INTELLIGENT HARDWARE, 2022, 4 (03): 189 - 209
  • [25] Gamification in Science Education. A Systematic Review of the Literature
    Kalogiannakis, Michail
    Papadakis, Stamatios
    Zourmpakis, Alkinoos-Ioannis
    EDUCATION SCIENCES, 2021, 11 (01): 1 - 36
  • [26] Practical work in science education: a systematic literature review
    Oliveira, Hugo
    Bonito, Jorge
    FRONTIERS IN EDUCATION, 2023, 8
  • [27] Quantitative Evaluation of Using Large Language Models and Retrieval-Augmented Generation in Computer Science Education
    Wang, Kevin Shukang
    Lawrence, Ramon
    PROCEEDINGS OF THE 56TH ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION, SIGCSE TS 2025, VOL 2, 2025, : 1183 - 1189
  • [29] The Application of Large Language Models in Gastroenterology: A Review of the Literature
    Maida, Marcello
    Celsa, Ciro
    Lau, Louis H. S.
    Ligresti, Dario
    Baraldo, Stefano
    Ramai, Daryl
    Di Maria, Gabriele
    Cannemi, Marco
    Facciorusso, Antonio
    Camma, Calogero
    CANCERS, 2024, 16 (19)
  • [30] English Language Learners in Computer Science Education: A Scoping Review
    Lei, Yinchen
    Allen, Meghan
    PROCEEDINGS OF THE 53RD ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION (SIGCSE 2022), VOL 1, 2022, : 57 - 63