Analysis of LLMs for educational question classification and generation

Cited by: 0
Authors
Al Faraby, Said [1 ]
Romadhony, Ade [1 ]
Adiwijaya [1 ]
Affiliations
[1] School of Computing, Telkom University, Jl. Telekomunikasi No.1, Terusan Buah Batu, Bandung, 40257, Indonesia
Keywords
Contrastive Learning; Question answering
DOI
10.1016/j.caeai.2024.100298
Abstract
Large language models (LLMs) like ChatGPT have shown promise in generating educational content, including questions. This study evaluates the effectiveness of LLMs in classifying educational questions by type and in generating type-specific questions. We assessed ChatGPT's performance on a dataset of 4,959 user-generated questions labeled into ten categories, employing various prompting techniques and aggregating results with a voting method to improve robustness. Additionally, we evaluated ChatGPT's accuracy in generating type-specific questions from 100 reading sections sourced from five online textbooks, which were manually reviewed by human evaluators. We also generated questions based on learning objectives and compared their quality to questions crafted by human experts, with evaluations by experts and crowdsourced participants. Our findings show that ChatGPT achieved a macro-average F1-score of 0.57 in zero-shot classification, improving to 0.70 when combined with a Random Forest classifier trained on embeddings. The most effective prompting technique was zero-shot with added definitions, while few-shot and few-shot + Chain-of-Thought approaches underperformed. The voting method improved classification robustness. In generating type-specific questions, ChatGPT's accuracy was lower than anticipated. However, quality differences between ChatGPT-generated and human-generated questions were not statistically significant, indicating ChatGPT's potential for educational content creation. This study underscores the transformative potential of LLMs in educational practice: by classifying and generating high-quality educational questions, LLMs can reduce educators' workload and enable personalized learning experiences. © 2024
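The classification pipeline described in the abstract can be pictured with a short sketch. The Python fragment below is a minimal illustration, not the authors' implementation: ask_chatgpt_for_label and embed_questions are hypothetical placeholders for the ChatGPT prompt call and the embedding model (neither is specified in the record), and the fragment only shows (1) majority-voting aggregation of labels obtained from several prompting techniques and (2) an embeddings + Random Forest baseline scored with the macro-averaged F1 metric the abstract reports.

# Minimal sketch, assuming hypothetical helpers; not the authors' code.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def majority_vote(labels: list[str]) -> str:
    """Most frequent label across several prompt runs (the 'voting method')."""
    return Counter(labels).most_common(1)[0][0]


def ask_chatgpt_for_label(question: str, prompt_variant: str) -> str:
    """Hypothetical placeholder for one ChatGPT classification call."""
    raise NotImplementedError


def embed_questions(questions: list[str]):
    """Hypothetical placeholder returning one embedding vector per question."""
    raise NotImplementedError


def classify_with_voting(question: str, prompt_variants: list[str]) -> str:
    """Aggregate labels from several prompting techniques into one prediction."""
    return majority_vote([ask_chatgpt_for_label(question, p) for p in prompt_variants])


def rf_macro_f1(questions: list[str], gold_labels: list[str]) -> float:
    """Embedding + Random Forest baseline evaluated with macro-averaged F1."""
    X = embed_questions(questions)
    X_tr, X_te, y_tr, y_te = train_test_split(X, gold_labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")

Majority voting is used here only because repeated or varied prompts yield noisy labels; taking the most frequent label is the simplest robustness mechanism consistent with the abstract's description.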
Related papers
50 results in total
  • [1] LLMs Performance in Answering Educational Questions in Brazilian Portuguese: A Preliminary Analysis on LLMs Potential to Support Diverse Educational Needs
    Rodrigues, Luiz
    Xavier, Cleon
    Costa, Newarney
    Batista, Hyan
    Bagnhuk Silva, Luiz Felipe
    de Melo, Weslei Chaleghi
    Gasevic, Dragan
    Mello, Rafael Ferreira
    FIFTEENTH INTERNATIONAL CONFERENCE ON LEARNING ANALYTICS & KNOWLEDGE, LAK 2025, 2025, : 865 - 871
  • [2] Investigating Educational and Noneducational Answer Selection for Educational Question Generation
    Steuer, Tim
    Filighera, Anna
    Tregel, Thomas
    IEEE ACCESS, 2022, 10 : 63522 - 63531
  • [3] Diverse Content Selection for Educational Question Generation
    Hadifar, Amir
    Bitew, Semere Kiros
    Deleu, Johannes
    Hoste, Veronique
    Develder, Chris
    Demeester, Thomas
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 123 - 133
  • [4] Towards Enriched Controllability for Educational Question Generation
    Leite, Bernardo
    Cardoso, Henrique Lopes
    ARTIFICIAL INTELLIGENCE IN EDUCATION, AIED 2023, 2023, 13916 : 786 - 791
  • [5] Automating Question Generation From Educational Text
    Bhowmick, Ayan Kumar
    Jagmohan, Ashish
    Vempaty, Aditya
    Dey, Prasenjit
    Hall, Leigh
    Hartman, Jeremy
    Kokku, Ravi
    Maheshwari, Hema
    ARTIFICIAL INTELLIGENCE XL, AI 2023, 2023, 14381 : 437 - 450
  • [6] LLMs for science: Usage for code generation and data analysis
    Nejjar, Mohamed
    Zacharias, Luca
    Stiehle, Fabian
    Weber, Ingo
    JOURNAL OF SOFTWARE-EVOLUTION AND PROCESS, 2025, 37 (01)
  • [7] Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
    Yuan, Xingdi
    Wang, Tong
    Wang, Yen-Hsiang
    Fine, Emery
    Abdelghani, Rania
    Sauzeon, Helene
    Oudeyer, Pierre-Yves
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 12952 - 12965
  • [8] From Question Generation to Problem Mining and Classification
    Sychev, Oleg
    2022 INTERNATIONAL CONFERENCE ON ADVANCED LEARNING TECHNOLOGIES (ICALT 2022), 2022, : 304 - 305
  • [9] Automatic Educational Question Generation with Difficulty Level Controls
    Jiao, Ying
    Shridhar, Kumar
    Cui, Peng
    Zhou, Wangchunshu
    Sachan, Mrinmaya
    ARTIFICIAL INTELLIGENCE IN EDUCATION, AIED 2023, 2023, 13916 : 476 - 488
  • [10] A Systematic Review of Automatic Question Generation for Educational Purposes
    Kurdi, Ghader
    Leo, Jared
    Parsia, Bijan
    Sattler, Uli
    Al-Emari, Salam
    INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION, 2020, 30 (01) : 121 - 204