Assessing Computational Thinking Pedagogy in Serious Games Through Questionnaires, Think-aloud Testing, and Automated Data Logging

Cited by: 0
Authors
Fanfarelli, Joey R. [1 ]
Affiliation
[1] Marist Coll, Games & Emerging Media, Poughkeepsie, NY 12601 USA
Keywords
serious games; game; gaming; computational thinking; algorithmic thinking; test; testing; assessment; development
DOI
10.1109/ICISFALL51598.2021.9627365
Abstract
Computational thinking (CT) is an important skill for solving complex problems, encompassing processes such as decomposition, pattern recognition, abstraction, and algorithmic design. Game-based learning has recently grown in prevalence as a means of teaching CT, making games an important topic of study. However, there is currently no validated CT assessment tool that performs reliably across disciplines and age groups. In the absence of such a tool, this paper examines several software testing methods for evaluating the effectiveness of CT pedagogy within serious games. Specifically, it makes recommendations for applying standardized questionnaires, think-aloud testing, and automated data logging to evaluate games that promote CT learning. It concludes with a potential use case demonstrating how these methods can be combined to achieve a granular and actionable understanding of a complex CT assessment problem and its causes.
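To illustrate the automated data logging method mentioned in the abstract, the following is a minimal Python sketch of an in-game event logger whose records could later be triangulated with questionnaire and think-aloud data. The event names, metrics, and file path are hypothetical assumptions for illustration; this is not the paper's implementation.

import json
import time
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class GameEventLogger:
    """Records timestamped gameplay events for later CT-assessment analysis."""
    events: List[Dict[str, Any]] = field(default_factory=list)

    def log(self, event_type: str, **details: Any) -> None:
        # Each entry captures what the player did and when.
        self.events.append({"t": time.time(), "type": event_type, **details})

    def summarize(self) -> Dict[str, Any]:
        # Coarse, hypothetical indicators of CT-relevant behavior:
        # number of solution attempts and time spent on the level.
        attempts = sum(1 for e in self.events if e["type"] == "run_algorithm")
        start = self.events[0]["t"] if self.events else 0.0
        end = self.events[-1]["t"] if self.events else 0.0
        return {"attempts": attempts, "duration_s": round(end - start, 2)}

    def export(self, path: str) -> None:
        # Persist the raw log so it can be combined with questionnaire
        # and think-aloud data after the play session.
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)

# Example play session (hypothetical events)
logger = GameEventLogger()
logger.log("level_start", level=1)
logger.log("run_algorithm", blocks_used=7, succeeded=False)
logger.log("run_algorithm", blocks_used=5, succeeded=True)
logger.log("level_complete", level=1)
print(logger.summarize())
logger.export("session_log.json")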
Pages: 149-152
Page count: 4