Automatic Multiple-Choice Question Generation from Thai Text

Cited by: 0
Authors:
Kwankajornkiet, Chonlathorn [1 ]
Suchato, Atiwong [1 ]
Punyabukkana, Proadpran [1 ]
Affiliations:
[1] Chulalongkorn Univ, Dept Comp Engn, Bangkok 10500, Thailand
Keywords:
automatic question generation; ranking; WordNet; dictionary-based approach
DOI:
Not available
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes:
081104; 0812; 0835; 1405
Abstract:
This paper presents a method for generating fill-in-the-blank questions with multiple choices from Thai text for testing reading comprehension. The proposed method first segments the input text into clauses by tagging the part of speech of every word and identifying sentence-breaking spaces. Question phrases are then generated by selecting each word tagged as a noun as a possible answer. Distractors for a question are retrieved by collecting all words that share the answer's category. Finally, all generated question phrases and distractors are scored by linear regression models and ranked to obtain the most acceptable question phrases and distractors. A custom dictionary can be added as an option to the proposed method. The experimental results showed that 81.32% of the question phrases generated with a custom dictionary were rated as acceptable. However, only 49.32% of the questions with acceptable question phrases had at least one acceptable distractor. The results also indicated that the ranking process and a custom dictionary can improve the acceptability rate of generated questions and distractors.
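The pipeline outlined in the abstract (POS tagging, noun selection as answers, same-category distractor retrieval, and linear-model ranking) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Word structure, the category lexicon standing in for WordNet or a custom dictionary, and the scoring features and weights are all hypothetical placeholders for the trained linear regression models described in the paper.

# Minimal sketch of the described pipeline, assuming a POS tagger and a
# category lexicon (e.g. WordNet or a custom dictionary) already exist.
# Feature set and weights are illustrative, not the paper's trained models.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Word:
    surface: str
    pos: str       # part-of-speech tag, e.g. "NOUN"
    category: str  # semantic category of the word

def generate_questions(clause: List[Word],
                       lexicon: Dict[str, List[str]],
                       weights: List[float]):
    """Blank out each noun in the clause and rank the resulting candidates."""
    candidates = []
    for i, word in enumerate(clause):
        if word.pos != "NOUN":              # every noun is a possible answer
            continue
        phrase = " ".join(w.surface if j != i else "____"
                          for j, w in enumerate(clause))
        distractors = [s for s in lexicon.get(word.category, [])
                       if s != word.surface]
        candidates.append({"phrase": phrase,
                           "answer": word.surface,
                           "distractors": distractors})

    def score(cand):
        # Linear scoring; in the paper the weights come from regression models.
        features = [len(cand["phrase"]), len(cand["distractors"])]
        return sum(w * f for w, f in zip(weights, features))

    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    clause = [Word("Bangkok", "NOUN", "city"), Word("is", "VERB", "verb"),
              Word("the", "DET", "det"), Word("capital", "NOUN", "place"),
              Word("of", "ADP", "prep"), Word("Thailand", "NOUN", "country")]
    lexicon = {"city": ["Bangkok", "Chiang Mai", "Phuket", "Khon Kaen"],
               "place": ["capital", "province", "district"],
               "country": ["Thailand", "Laos", "Vietnam", "Cambodia"]}
    for q in generate_questions(clause, lexicon, weights=[0.05, 1.0]):
        print(q["phrase"], "| answer:", q["answer"],
              "| distractors:", q["distractors"][:3])

In the paper, candidate question phrases and distractors are scored and ranked separately; the single scoring function above merely illustrates how a linear model over simple features can order the generated candidates.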
Pages: 308-313
Page count: 6
Related papers (50 in total; items [41]-[50] shown):
  • [41] Automatic Generation of Multiple-choice Cloze-test Questions for Lao Language Learning
    Qiu, Xinying
    Xue, Haiwei
    Liang, Lingfeng
    Xie, Zexin
    Liao, Shuxuan
    Shi, Guofeng
    2021 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2021, : 125 - 130
  • [42] End-to-End generation of Multiple-Choice questions using Text-to-Text transfer Transformer models
    Rodriguez-Torrealba, Ricardo
    Garcia-Lopez, Eva
    Garcia-Cabot, Antonio
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 208
  • [43] Notes From the Field: Automatic Item Generation, Standard Setting, and Learner Performance in Mastery Multiple-Choice Tests
    Shappell, Eric
    Podolej, Gregory
    Ahn, James
    Tekian, Ara
    Park, Yoon Soo
    EVALUATION & THE HEALTH PROFESSIONS, 2021, 44 (03) : 315 - 318
  • [44] LEARNING IN AN AUTOMATIC MULTIPLE-CHOICE BOX WITH LIGHT AS INCENTIVE
    FLYNN, JP
    JEROME, EA
    JOURNAL OF COMPARATIVE AND PHYSIOLOGICAL PSYCHOLOGY, 1952, 45 (05): : 336 - 340
  • [45] Automatic Chinese Multiple Choice Question Generation Using Mixed Similarity Strategy
    Liu, Ming
    Rus, Vasile
    Liu, Li
    IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, 2018, 11 (02): : 193 - 202
  • [46] Automated Generation and Tagging of Knowledge Components from Multiple-Choice Questions
    Moore, Steven
    Schmucker, Robin
    Mitchell, Tom
    Stamper, John
    PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON LEARNING@SCALE, L@S 2024, 2024, : 122 - 133
  • [47] MCQ4 - A PROGRAM FOR MULTIPLE-CHOICE QUESTION EVALUATION
    CORBETT, M
    MEDICAL EDUCATION, 1986, 20 (01) : 77 - 77
  • [48] Study of Summative Evaluation by New Multiple-choice Question Format
    Ohshima, Naoki
    Ishimatsu, Jun
    PROCEEDINGS OF THE 14TH INTERNATIONAL CONFERENCE ON INNOVATION AND MANAGEMENT, VOLS I & II, 2017, : 991 - 997
  • [49] Cross-lingual Training for Multiple-Choice Question Answering
    Echegoyen, Guillermo
    Rodrigo, Alvaro
    Penas, Anselmo
    PROCESAMIENTO DEL LENGUAJE NATURAL, 2020, (65): : 37 - 44
  • [50] Development and validation of a multiple-choice question paper in basic colonoscopy
    Thomas-Gibson, S
    Saunders, BP
    ENDOSCOPY, 2005, 37 (09) : 821 - 826