Automatic story and item generation for reading comprehension assessments with transformers

Cited by: 3
Authors
Bulut, Okan [1 ]
Yildirim-Erbasli, Seyma Nur [2 ]
Affiliations
[1] Univ Alberta, Ctr Res Appl Measurement & Evaluat, Edmonton, AB, Canada
[2] Concordia Univ Edmonton, Fac Arts, Dept Psychol, Edmonton, AB, Canada
Keywords
Reading comprehension; Natural language processing; Automatic item generation; Language modeling; Text generation; LITERACY; DIFFICULTY; SCHOOLS;
DOI
10.21449/ijate.1124382
CLC Number
G40 [Education]
Discipline Classification Codes
040101; 120403
Abstract
Reading comprehension is one of the essential skills for students as they transition from learning to read to reading to learn. Over the last decade, the increased use of digital learning materials for promoting literacy skills (e.g., oral fluency and reading comprehension) in K-12 classrooms has been a boon for teachers. However, instant access to reading materials, as well as to relevant assessment tools for evaluating students' comprehension skills, remains a problem. Teachers must spend many hours looking for suitable materials for their students because high-quality reading materials and assessments are primarily available through commercial literacy programs and websites. This study proposes a promising solution to this problem by employing an artificial intelligence (AI) approach. We demonstrate how advanced language models (e.g., OpenAI's GPT-2 and Google's T5) can be used to automatically generate reading passages and comprehension items. Our preliminary findings suggest that, with additional training and fine-tuning, open-source language models could support the instruction and assessment of reading comprehension skills in the classroom. For both automatic story and item generation, the language models performed reasonably well; however, their outputs still require human evaluation and further adjustment before they are shared with students. Practical implications of the findings and future research directions are discussed.
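The abstract describes a two-step approach: generate a reading passage with GPT-2, then generate a comprehension item from that passage with T5. The following is a minimal sketch of that idea using the Hugging Face transformers library; it is illustrative only, not the authors' code, and the T5 question-generation checkpoint name is a placeholder (the study fine-tunes its own model).

```python
# Minimal, illustrative sketch (not the authors' code) of the two-step approach:
# a GPT-2 passage generator followed by a T5-based question (item) generator.
from transformers import pipeline

# Step 1: story generation. Prompt GPT-2 with an opening sentence and sample a continuation.
story_generator = pipeline("text-generation", model="gpt2")
prompt = "One sunny morning, Maya found a tiny, shivering kitten on her doorstep."
story = story_generator(
    prompt,
    max_new_tokens=150,   # length of the generated continuation
    do_sample=True,       # sample rather than decode greedily, for more varied stories
    top_p=0.95,
    temperature=0.8,
)[0]["generated_text"]
print(story)

# Step 2: item generation. Feed the passage to a T5 model fine-tuned for question
# generation. The checkpoint name below is a placeholder for illustration; any
# publicly shared question-generation T5 checkpoint could be substituted.
item_generator = pipeline("text2text-generation", model="your-org/t5-question-generation")
item = item_generator("generate question: " + story, max_new_tokens=64)
print(item[0]["generated_text"])
```

As the abstract notes, output from either step would still need human review before being used with students.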
Pages: 72-87
Number of pages: 16
Related Papers (50 in total)
  • [1] Automatic generation of short answer questions for reading comprehension assessment
    Huang, Yan
    He, Lianzhen
    NATURAL LANGUAGE ENGINEERING, 2016, 22 (03) : 457 - 489
  • [2] Automatic Generation of Summaries and Questions to Support the Reading Comprehension Process
    Contreras-Arguello, Miriam Lizbeth
    Paredes-Valverde, Mario Andres
    Trinidad Vasquez, Aurelio Miguel
    Salas-Zarate, Maria del Pilar
    TECHNOLOGIES AND INNOVATION, CITI 2024, 2025, 2276 : 81 - 92
  • [3] Manipulating processing difficulty of reading comprehension questions: The feasibility of verbal item generation
    Gorin, JS
    JOURNAL OF EDUCATIONAL MEASUREMENT, 2005, 42 (04) : 351 - 373
  • [4] Automatic item generation: foundations and machine learning-based approaches for assessments
    Circi, Ruhan
    Hicks, Juanita
    Sikali, Emmanuel
    FRONTIERS IN EDUCATION, 2023, 8
  • [5] The interactive reading task: Transformer-based automatic item generation
    Attali, Yigal
    Runge, Andrew
    LaFlair, Geoffrey T.
    Yancey, Kevin
    Goodwin, Sarah
    Park, Yena
    von Davier, Alina A.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [6] Automatic Generation of Cloze Items for Repeated Testing to Improve Reading Comprehension
    Yang, Albert C. M.
    Chen, Irene Y. L.
    Flanagan, Brendan
    Ogata, Hiroaki
    EDUCATIONAL TECHNOLOGY & SOCIETY, 2021, 24 (03): : 147 - 158
  • [7] AUTOMATIC DECODING AND READING COMPREHENSION
    SAMUELS, SJ
    LANGUAGE ARTS, 1976, 53 (03) : 323 - 325
  • [8] ITEM BIAS IN A TEST OF READING-COMPREHENSION
    LINN, RL
    LEVINE, MV
    HASTINGS, CN
    WARDROP, JL
    APPLIED PSYCHOLOGICAL MEASUREMENT, 1981, 5 (02) : 159 - 173
  • [9] The Role of Item Models in Automatic Item Generation
    Gierl, Mark J.
    Lai, Hollis
    INTERNATIONAL JOURNAL OF TESTING, 2012, 12 (03) : 273 - 298