Preliminary Systematic Review of Open-Source Large Language Models in Education

Cited by: 2
Authors
Lin, Michael Pin-Chuan [1 ]
Chang, Daniel [2 ]
Hall, Sarah [1 ]
Jhajj, Gaganpreet [3 ]
Affiliations
[1] Mt St Vincent Univ, Halifax, NS, Canada
[2] Simon Fraser Univ, Burnaby, BC, Canada
[3] Athabasca Univ, Athabasca, AB, Canada
Keywords
Large Language Models; Open-Source; AI in Education; Educational Technology; Pedagogical Innovation;
DOI
10.1007/978-3-031-63028-6_6
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work-in-progress study explores and analyzes the growing impact of large language models (LLMs) in education and industry. We preliminarily review how LLMs can be integrated into educational contexts, considering their technical features, open-source nature, and applicability. Through a systematic search, we identified a selection of open-source LLMs released or significantly updated after 2021. This initial search indicates a thriving field with immense potential for both academic and industry applications. While LLMs hold promise for education, several challenges must be addressed, including the limited application of open-source LLMs to date and concerns regarding data privacy, content accuracy, and potential bias. These factors warrant careful consideration before LLMs are deployed in educational settings. Nevertheless, our preliminary research highlights the versatility of LLMs in generating educational content and supporting diverse instructional strategies, suggesting a shift toward more adaptive and personalized learning environments. By assessing the suitability of these models for educational purposes, this study lays the foundation for future research aimed at fully realizing the potential of open-source LLMs to transform teaching and learning practices. As the work progresses, we plan to expand our investigation to the broader implications of LLMs for educational outcomes and pedagogical contexts. Ultimately, our goal is to facilitate dynamic, inclusive, and effective learning experiences across diverse educational environments.
Pages: 68-77 (10 pages)
Related Papers
50 records
  • [1] Open-source large language models in medical education: Balancing promise and challenges
    Ray, Partha Pratim
    ANATOMICAL SCIENCES EDUCATION, 2024, 17 (06) : 1361 - 1362
  • [2] Re: Open-Source Large Language Models in Radiology
    Kooraki, Soheil
    Bedayat, Arash
    ACADEMIC RADIOLOGY, 2024, 31 (10) : 4293 - 4293
  • [3] Servicing open-source large language models for oncology
    Ray, Partha Pratim
    ONCOLOGIST, 2024
  • [4] Open-Source Large Language Models in Radiology: A Review and Tutorial for Practical Research and Clinical Deployment
    Savage, Cody H.
    Kanhere, Adway
    Parekh, Vishwa
    Langlotz, Curtis P.
    Joshi, Anupam
    Huang, Heng
    Doo, Florence X.
    RADIOLOGY, 2025, 314 (01)
  • [5] A tutorial on open-source large language models for behavioral science
    Hussain, Zak
    Binz, Marcel
    Mata, Rui
    Wulff, Dirk U.
    BEHAVIOR RESEARCH METHODS, 2024, 56 (08) : 8214 - 8237
  • [6] Upgrading Academic Radiology with Open-Source Large Language Models
    Ray, Partha Pratim
    ACADEMIC RADIOLOGY, 2024, 31 (10) : 4291 - 4292
  • [7] Classifying Cancer Stage with Open-Source Clinical Large Language Models
    Chang, Chia-Hsuan
    Lucas, Mary M.
    Lu-Yao, Grace
    Yang, Christopher C.
    2024 IEEE 12TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS, ICHI 2024, 2024, : 76 - 82
  • [8] Comparison of Frontier Open-Source and Proprietary Large Language Models for Complex Diagnoses
    Buckley, Thomas A.
    Crowe, Byron
    Abdulnour, Raja-Elie E.
    Rodman, Adam
    Manrai, Arjun K.
    JAMA HEALTH FORUM, 2025, 6 (03)
  • [9] PharmaLLM: A Medicine Prescriber Chatbot Exploiting Open-Source Large Language Models
    Ayesha Azam
    Zubaira Naz
    Muhammad Usman Ghani Khan
    HUMAN-CENTRIC INTELLIGENT SYSTEMS, 2024, 4 (4) : 527 - 544
  • [10] Automated Essay Scoring and Revising Based on Open-Source Large Language Models
    Song, Yishen
    Zhu, Qianta
    Wang, Huaibo
    Zheng, Qinhua
    IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, 2024, 17 : 1920 - 1930