Administrative and structural changes in student evaluations of teaching and their effects on overall instructor scores

Cited by: 9
Authors
Zipser, Nina [1 ]
Mincieli, Lisa [1 ]
Affiliations
[1] Harvard Univ, Fac Arts & Sci, Off Fac Affairs, Cambridge, MA 02138 USA
Keywords
Student evaluation of teaching; scale change; online survey administration; instructor effectiveness; ONLINE; RATINGS; VALIDITY; SCALES; IMPACT; PAPER
DOI
10.1080/02602938.2018.1425368
Chinese Library Classification (CLC) number
G40 [Education]
Discipline classification codes
040101; 120403
Abstract
Using nine years of student evaluation of teaching (SET) data from a large US research university, we examine whether changes to the SET instrument have a substantial impact on overall instructor scores. Our study exploits four distinct natural experiments that arose when the SET instrument was changed. To maximise power, we compare the same course/instructor before and after each of the four changes occurred. We find that switching from in-class, paper course evaluations to online evaluations generates an average change of -0.14 points on a five-point scale, or 0.25 standard deviations (SDs), in the overall instructor ratings. Changing the labelling of the scale and the wording of the overall instructor question generates a further decrease in the average rating of -0.15 points (0.27 SDs). In contrast, extending the evaluation period to include the final examination and offering an incentive (early grade release) for completing the evaluations do not have a statistically significant effect on the overall instructor rating. The cumulative impact of these individual changes is -0.29 points (0.52 SDs). This large decrease shows that SET scores are not comparable over time when instruments change. Therefore, administrators should measure and account for such changes when using historical benchmarks for evaluative purposes (e.g. appointments and compensation).
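The abstract reports each instrument change both in raw points on the five-point scale and as an effect size in standard deviations, based on paired before/after comparisons of the same course/instructor. As a rough illustration only, the Python sketch below uses made-up ratings (not the study's data) and assumes the effect size is the mean change divided by the SD of the pre-change ratings; the paper's exact standardisation and model are not specified here.

```python
import numpy as np
from scipy import stats

# Hypothetical overall-instructor ratings for the same course/instructor
# offerings before and after an instrument change (illustrative values only).
before = np.array([4.3, 4.6, 4.1, 4.8, 4.0, 4.5])
after = np.array([4.1, 4.5, 4.0, 4.6, 3.9, 4.3])

diff = after - before
mean_change = diff.mean()                           # average change in points (5-point scale)
effect_size_sd = mean_change / before.std(ddof=1)   # change expressed in SDs of pre-change ratings

# Paired t-test across matched offerings to check statistical significance.
t_stat, p_value = stats.ttest_rel(after, before)

print(f"Mean change: {mean_change:.2f} points ({effect_size_sd:.2f} SDs), p = {p_value:.3f}")
```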
Pages: 995-1008
Page count: 14