Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments

Cited: 1
Authors
Park, Jung Yeon [1,2]
Dedja, Klest [3]
Pliakos, Konstantinos [3]
Kim, Jinho [2,4,5]
Joo, Sean [2,6]
Cornillie, Frederik [2]
Vens, Celine [3]
Van den Noortgate, Wim [2]
Affiliations
[1] George Mason Univ, Coll Educ & Human Dev, 4400 Univ Dr, Fairfax, VA 22030 USA
[2] Katholieke Univ Leuven, Fac Psychol & Educ Sci & Itec, Imec Res Grp, Campus KULAK, Etienne Sabbelaan 51, B-8500 Kortrijk, Belgium
[3] Katholieke Univ Leuven, Dept Publ Hlth & Primary Care & Itec, Imec Res Grp, Campus KULAK, Etienne Sabbelaan 51, B-8500 Kortrijk, Belgium
[4] Univ Seoul, Grad Sch Educ, 163 Seoulsiripdaero, Seoul 02504, South Korea
[5] Univ Seoul, Urban Bigdata AI Inst, 163 Seoulsiripdaero, Seoul 02504, South Korea
[6] Univ Kansas, Dept Educ Psychol, 1450 Jayhawk Blvd, Lawrence, KS 66045 USA
Keywords
Item response theory; Explanatory item response model; Machine learning; Background information; Prediction performance; Educational assessment; Classifiers
DOI
10.3758/s13428-022-01910-8
Chinese Library Classification (CLC)
B841 [Psychological research methods]
Discipline code
040201
Abstract
To obtain more accurate and robust feedback from students' assessment outcomes, communicate it to students, and optimize teaching and learning strategies, educational researchers and practitioners must critically reflect on whether existing data-analytic methods are capable of retrieving the information contained in the assessment data. This study compared the prediction performance of an item response theory method, specifically an explanatory item response model (EIRM), with that of six supervised machine learning (ML) methods for predicting students' item responses in educational assessments, taking student- and item-related background information into account. Each of the seven prediction methods was evaluated through cross-validation under three prediction scenarios: (a) unrealized responses of new students to existing items, (b) unrealized responses of existing students to new items, and (c) missing responses of existing students to existing items. The results of a simulation study and two real-life assessment data examples showed that using student- and item-related background information in addition to the item response data substantially increases prediction accuracy for new students or items. We also found that the EIRM is competitive with the best-performing ML methods in predicting student performance outcomes on the educational assessment datasets.
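For readers unfamiliar with the model named in the abstract: the EIRM belongs to the explanatory item response modeling framework, in which both student ability and item difficulty are regressed on background covariates. A minimal sketch follows, assuming a standard doubly explanatory (Rasch-type) formulation; the covariate sets and residual structure shown are generic placeholders for illustration, not the paper's exact specification.

% Sketch only: standard doubly explanatory IRT formulation with
% illustrative covariate sets, not necessarily the paper's exact model.
\[
  \operatorname{logit} \Pr(Y_{pi} = 1) = \theta_p - \beta_i,
  \qquad
  \theta_p = \sum_{j} \vartheta_j X_{pj} + \varepsilon_p,
  \qquad
  \beta_i = \sum_{k} \delta_k Z_{ik} + \varepsilon_i,
\]
where \(Y_{pi}\) is the binary response of student \(p\) to item \(i\), \(X_{pj}\) and \(Z_{ik}\) are student- and item-related background variables with regression weights \(\vartheta_j\) and \(\delta_k\), and \(\varepsilon_p \sim N(0, \sigma^2_{\theta})\) and \(\varepsilon_i \sim N(0, \sigma^2_{\beta})\) capture residual ability and difficulty not explained by the covariates. Under this kind of formulation, predictions for new students or new items must rely on the covariate parts of \(\theta_p\) and \(\beta_i\) (the residual term of an unseen student or item is unknown), which is consistent with the abstract's finding that background information helps most in the new-student and new-item scenarios.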
Pages: 2109-2124
Number of pages: 16
Related articles
50 records in total
  • [1] Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
    Jung Yeon Park
    Klest Dedja
    Konstantinos Pliakos
    Jinho Kim
    Sean Joo
    Frederik Cornillie
    Celine Vens
    Wim Van den Noortgate
    Behavior Research Methods, 2023, 55 : 2109 - 2124
  • [2] Prediction of Essay Cohesion in Portuguese Based on Item Response Theory in Machine Learning
    Barreiros Rosa, Bruno Alexandre
    Oliveira, Hilario
    Mello, Rafael Ferreira
    ARTIFICIAL INTELLIGENCE IN EDUCATION: POSTERS AND LATE BREAKING RESULTS, WORKSHOPS AND TUTORIALS, INDUSTRY AND INNOVATION TRACKS, PRACTITIONERS, DOCTORAL CONSORTIUM AND BLUE SKY, AIED 2024, 2024, 2151 : 388 - 394
  • [3] Item Response Theory Based Ensemble in Machine Learning
    Chen, Ziheng
    Ahn, Hongshik
    INTERNATIONAL JOURNAL OF AUTOMATION AND COMPUTING, 2020, 17 (05) : 621 - 636
  • [4] Making Sense of Item Response Theory in Machine Learning
    Martinez-Plumed, Fernando
    Prudencio, Ricardo B. C.
    Martinez-Uso, Adolfo
    Hernandez-Orallo, Jose
    ECAI 2016: 22ND EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, 285 : 1140 - 1148
  • [5] Methods for utilizing Item response theory with Coupled, Multiple-Response assessments
    Wilcox, Bethany R.
    Rainey, Katherine D.
    Vignal, Michael
    2022 PHYSICS EDUCATION RESEARCH CONFERENCE (PERC), 2022, : 488 - 493
  • [6] Performance of longitudinal item response theory models in shortened or partial assessments
    Arrington, Leticia
    Ueckert, Sebastian
    Ahamadi, Malidi
    Macha, Sreeraj
    Karlsson, Mats O.
    JOURNAL OF PHARMACOKINETICS AND PHARMACODYNAMICS, 2020, 47 (05) : 461 - 471
  • [7] Use and Interpretation of Item Response Theory Applied to Machine Learning
    Dias, Jade
    Rodrigues, Caio Maia
    Rodrigues, Abner Cardoso
    COMPUTATIONAL NEUROSCIENCE, LAWCN 2021, 2022, 1519 : 15 - 24