A Comparison of the Relative Performance of Four IRT Models on Equating Passage-Based Tests

Cited by: 4
Authors
Kim, Kyung Yong [1 ]
Lim, Euijin [2 ]
Lee, Won-Chan [3 ]
Affiliations
[1] Univ North Carolina Greensboro, Educ Res Methodol, Greensboro, NC 27412 USA
[2] Seoul Natl Univ, TEPS Ctr, Language Educ Inst, Seoul, South Korea
[3] Univ Iowa, CASMA, Iowa City, IA 52242 USA
Keywords
equating; item response theory; bifactor model; testlet response theory model
DOI
10.1080/15305058.2018.1530239
Chinese Library Classification
C [Social Sciences, General]
Subject Classification
03; 0303
Abstract
For passage-based tests, items that belong to a common passage often violate the local independence assumption of unidimensional item response theory (UIRT). In this case, ignoring local item dependence (LID) and estimating item parameters with a UIRT model can be problematic: doing so may yield inaccurate parameter estimates, which, in turn, can affect equating results. The main purpose of this article was to compare, under the random groups design, the relative performance of the three-parameter logistic (3PL), graded response (GR), bifactor, and testlet models in equating passage-based tests when various degrees of LID were present due to passage. Simulation results showed that the testlet model produced the most accurate equating results, followed by the bifactor model. The 3PL model worked as well as the bifactor and testlet models when the degree of LID was low but returned less accurate equating results than the two multidimensional models as the degree of LID increased. Among the four models, the polytomous GR model provided the least accurate equating results.
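To make the local item dependence described in the abstract concrete, the sketch below simulates dichotomous responses under a simple Rasch-like testlet model, in which all items in a passage share a person-specific passage effect. This is an illustrative assumption, not the authors' simulation design: the function names, the `testlet_sd` parameter, and the use of raw inter-item correlations as a crude LID index are all hypothetical choices made for this example.

```python
import numpy as np

def simulate_testlet_responses(n_persons, n_passages, items_per_passage,
                               testlet_sd, rng):
    """Simulate 0/1 responses under a Rasch-like testlet model:
    logit P(X=1) = theta + gamma_d(i) - b_i, where gamma_d(i) is a
    passage-specific person effect shared by every item in passage d(i).
    Setting testlet_sd = 0 recovers a locally independent UIRT model."""
    theta = rng.normal(0.0, 1.0, size=n_persons)                        # general ability
    gamma = rng.normal(0.0, testlet_sd, size=(n_persons, n_passages))   # testlet effects
    b = rng.normal(0.0, 1.0, size=n_passages * items_per_passage)       # item difficulties
    passage = np.repeat(np.arange(n_passages), items_per_passage)       # item -> passage map
    logit = theta[:, None] + gamma[:, passage] - b[None, :]
    p = 1.0 / (1.0 + np.exp(-logit))
    return (rng.random(p.shape) < p).astype(int), passage

def mean_pairwise_corr(X, passage, within):
    """Average pairwise inter-item correlation, restricted to item pairs
    within the same passage (within=True) or across passages (False)."""
    R = np.corrcoef(X, rowvar=False)
    same = passage[:, None] == passage[None, :]
    mask = np.triu(same if within else ~same, k=1)   # each pair counted once
    return R[mask].mean()

rng = np.random.default_rng(42)
X, passage = simulate_testlet_responses(
    n_persons=5000, n_passages=5, items_per_passage=4,
    testlet_sd=1.0, rng=rng)
w = mean_pairwise_corr(X, passage, within=True)
b = mean_pairwise_corr(X, passage, within=False)
print(f"within-passage r = {w:.3f}, between-passage r = {b:.3f}")
```

When `testlet_sd > 0`, within-passage item pairs correlate more strongly than between-passage pairs, which is exactly the excess dependence a UIRT model such as the 3PL ignores and a bifactor or testlet model absorbs.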
Pages: 248-269 (22 pages)
Related Articles
50 records in total
  • [1] Noncompensatory MIRT For Passage-Based Tests
    Kim, Nana
    Bolt, Daniel M.
    Wollack, James
    PSYCHOMETRIKA, 2022, 87 (03) : 992 - 1009
  • [2] Observed Score Equating Using Discrete and Passage-Based Anchor Items
    Zu, Jiyun
    Liu, Jinghua
    JOURNAL OF EDUCATIONAL MEASUREMENT, 2010, 47 (04) : 395 - 412
  • [3] Setting passing scores on passage-based tests: A comparison of traditional and single-passage bookmark methods
    Skaggs, Gary
    Hein, Serge F.
    Awuor, Risper
    APPLIED MEASUREMENT IN EDUCATION, 2007, 20 (04) : 405 - 426
  • [4] Utilizing passage-based language models for document retrieval
    Bendersky, Michael
    Kurland, Oren
    ADVANCES IN INFORMATION RETRIEVAL, 2008, 4956 : 162 - +
  • [5] Comparison of four IRT models when analyzing two tests for inductive reasoning
    de Koning, E
    Sijtsma, K
    Hamers, JHM
    APPLIED PSYCHOLOGICAL MEASUREMENT, 2002, 26 (03) : 302 - 320
  • [6] Matching Business Process Models Using Positional Passage-Based Language Models
    Weidlich, Matthias
    Sheetrit, Eitam
    Branco, Moises C.
    Gal, Avigdor
    CONCEPTUAL MODELING, ER 2013, 2013, 8217 : 130 - +
  • [7] Utilizing passage-based language models for ad hoc document retrieval
    Bendersky, Michael
    Kurland, Oren
    INFORMATION RETRIEVAL, 2010, 13 (02) : 157 - 187
  • [8] Post-hoc IRT equating of previously administered English tests for comparison of test scores
    Saida, Chisato
    Hattori, Tamaki
    LANGUAGE TESTING, 2008, 25 (02) : 187 - 210