End-to-End generation of Multiple-Choice questions using Text-to-Text Transfer Transformer models

Cited by: 36
Authors
Rodriguez-Torrealba, Ricardo [1 ]
Garcia-Lopez, Eva [1 ]
Garcia-Cabot, Antonio [1 ]
Affiliations
[1] Univ Alcala, Dept Ciencias Comp, Alcala De Henares 28801, Madrid, Spain
Keywords
Multiple-Choice Question Generation; Distractor Generation; Question Answering; Question Generation; Reading Comprehension;
DOI
10.1016/j.eswa.2022.118258
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The increasing worldwide adoption of e-learning tools and the widespread growth of online education have brought multiple challenges, including the need to generate assessments at the scale and speed this environment demands. Recent advances in language models and architectures such as the Transformer provide opportunities to explore how educators can be assisted in these tasks. This study focuses on using neural language models to generate questionnaires composed of multiple-choice questions, with English Wikipedia articles as input. The problem is addressed along three dimensions: Question Generation (QG), Question Answering (QA), and Distractor Generation (DG). A processing pipeline based on pre-trained T5 language models is designed, and a REST API is implemented for its use. The DG task is defined in a text-to-text format, and a T5 model is fine-tuned on the DG-RACE dataset, improving the ROUGE-L score over the reference for that dataset. The lack of an adequate metric for DG is discussed, and cosine similarity over word embeddings is considered as a complement. Questionnaires are evaluated by human experts, who report that questions and options are generally well formed but oriented more toward measuring retention than comprehension.
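As a minimal illustration of the text-to-text framing of distractor generation described in the abstract, the sketch below casts DG as a sequence-to-sequence task with a pre-trained T5 checkpoint from the Hugging Face transformers library. The prompt layout ("generate distractor: question: ... answer: ... context: ..."), the base checkpoint name "t5-small", and the helper function generate_distractors are assumptions for illustration; the paper fine-tunes T5 on DG-RACE with its own input format, which this record does not specify.

```python
# Hedged sketch: distractor generation (DG) as a text-to-text task with T5.
# The prompt format and the "t5-small" checkpoint are illustrative assumptions;
# in the paper a model fine-tuned on DG-RACE would be used instead.
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "t5-small"  # placeholder; swap in a DG-RACE fine-tuned checkpoint in practice
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def generate_distractors(question: str, answer: str, context: str, n: int = 3) -> list[str]:
    """Pack question, correct answer and passage into one source sequence;
    each decoded target sequence is treated as a candidate distractor."""
    source = f"generate distractor: question: {question} answer: {answer} context: {context}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(
        **inputs,
        max_length=32,
        num_beams=max(n, 4),       # beam search wide enough to return n hypotheses
        num_return_sequences=n,
        early_stopping=True,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

if __name__ == "__main__":
    print(generate_distractors(
        question="What architecture underlies the T5 model?",
        answer="the Transformer",
        context="T5 is an encoder-decoder model built on the Transformer architecture.",
    ))
```

A question-generation or question-answering step in the pipeline would follow the same pattern, only with a different prompt prefix and target sequence.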
Pages: 12
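The abstract also mentions cosine similarity over word embeddings as a complement to ROUGE-L for evaluating distractors. The sketch below shows one plausible reading of that measure: average the word vectors of a generated and a reference distractor and compare them with cosine similarity. The vector averaging, the toy random embeddings, and the function names are assumptions; the record does not say which embedding model the paper uses.

```python
# Hedged sketch: cosine similarity between word-embedding representations of a
# generated distractor and a reference distractor. Averaging token vectors and
# the toy embedding table are illustrative assumptions; real use would load
# pre-trained vectors (e.g. GloVe or word2vec).
import numpy as np

def sentence_vector(text: str, embeddings: dict[str, np.ndarray], dim: int = 300) -> np.ndarray:
    """Average the word vectors of in-vocabulary tokens; zero vector if none match."""
    vectors = [embeddings[tok] for tok in text.lower().split() if tok in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Usage with a toy vocabulary of random vectors, for illustration only.
rng = np.random.default_rng(0)
toy_embeddings = {w: rng.normal(size=300) for w in ["red", "blue", "green", "car"]}
generated, reference = "red car", "blue car"
score = cosine_similarity(
    sentence_vector(generated, toy_embeddings),
    sentence_vector(reference, toy_embeddings),
)
print(f"cosine similarity: {score:.3f}")
```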
Related Papers
50 records in total
  • [31] SText-DETR: End-to-End Arbitrary-Shaped Text Detection with Scalable Query in Transformer
    Liao, Pujin
    Wang, Zengfu
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 481 - 492
  • [32] Transforming Scene Text Detection and Recognition: A Multi-Scale End-to-End Approach With Transformer Framework
    Geng, Tianyu
    IEEE ACCESS, 2024, 12 : 40582 - 40596
  • [33] T5G2P: Using Text-to-Text Transfer Transformer for Grapheme-to-Phoneme Conversion
    Rezackova, Marketa
    Svec, Jan
    Tihelka, Daniel
    INTERSPEECH 2021, 2021, : 6 - 10
  • [34] Text-only domain adaptation for end-to-end ASR using integrated text-to-mel-spectrogram generator
    Bataev, Vladimir
    Korostik, Roman
    Shabalin, Evgeny
    Lavrukhin, Vitaly
    Ginsburg, Boris
    INTERSPEECH 2023, 2023, : 2928 - 2932
  • [35] SpecTextor: End-to-End Attention-based Mechanism for Dense Text Generation in Sports Journalism
    Ghosh, Indrajeet
    Ivler, Matthew
    Ramamurthy, Sreenivasan Ramasamy
    Roy, Nirmalya
    2022 IEEE INTERNATIONAL CONFERENCE ON SMART COMPUTING (SMARTCOMP 2022), 2022, : 362 - 367
  • [36] Using cognitive models to develop quality multiple-choice questions
    Pugh, Debra
    De Champlain, Andre
    Gierl, Mark
    Lai, Hollis
    Touchie, Claire
    MEDICAL TEACHER, 2016, 38 (08) : 838 - 843
  • [37] End-to-end text-to-speech synthesis with unaligned multiple language units based on attention
    Aso, Masashi
    Takamichi, Shinnosuke
    Saruwatari, Hiroshi
    INTERSPEECH 2020, 2020, : 4009 - 4013
  • [38] End-to-end text-dependent speaker verification using novel distance measures
    Dey, Subhadeep
    Madikeri, Srikanth
    Motlicek, Petr
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 3598 - 3602
  • [39] End-to-end Handwritten Chinese Paragraph Text Recognition Using Residual Attention Networks
    Wang, Yintong
    Yang, Yingjie
    Chen, Haiyan
    Zheng, Hao
    Chang, Heyou
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2022, 34 (01): : 371 - 388
  • [40] AN END-TO-END CHINESE TEXT NORMALIZATION MODEL BASED ON RULE-GUIDED FLAT-LATTICE TRANSFORMER
    Dai, Wenlin
    Song, Changhe
    Li, Xiang
    Wu, Zhiyong
    Pan, Huashan
    Li, Xiulin
    Meng, Helen
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7122 - 7126