Peer and Self Assessment in Massive Online Classes

Cited by: 164
Authors
Kulkarni, Chinmay [1 ]
Wei, Koh Pang [1 ,2 ]
Le, Huy [2 ]
Chia, Daniel [1 ,2 ]
Papadopoulos, Kathryn [1 ]
Cheng, Justin [1 ]
Koller, Daphne [1 ,2 ]
Klemmer, Scott R. [1 ,3 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, HCI Grp, Stanford, CA 94305 USA
[2] Coursera Inc, Mountain View, CA 94040 USA
[3] Univ Calif San Diego, San Diego, CA 92103 USA
Funding
U.S. National Science Foundation
Keywords
Peer assessment; self-assessment; MOOC; online education; massive online classroom; design assessment; qualitative feedback; design crit; studio-based learning; FEEDBACK; CREATIVITY; EDUCATION; EXPERT; STUDIO; MODEL;
DOI
10.1145/2505057
CLC classification number
TP3 (computing technology, computer technology)
Discipline classification code
0812
Abstract
Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students' grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers' work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.
Pages: 31