Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer-group rating

Cited by: 123
Authors
Hafner, OC [1]
Hafner, P
Affiliations
[1] Occidental Coll, Moore Lab Zool, Los Angeles, CA 90041 USA
[2] Occidental Coll, Dept Biol, Los Angeles, CA 90041 USA
[3] Pasadena Unified Sch Dist, San Rafael Elementary Sch, Pasadena, CA 91105 USA
Funding
U.S. National Science Foundation
DOI
10.1080/0950069022000038268
Chinese Library Classification
G40 [Education]
Discipline classification codes
040101; 120403
Abstract
Although the rubric has emerged as one of the most popular assessment tools in progressive educational programs, the literature contains little quantitative evidence of its actual effectiveness as an assessment tool in the hands of students. This study examines the validity and reliability of the rubric as an instrument for student peer-group evaluation, in an effort to further explore its use and effectiveness. A total of 1577 peer-group ratings of oral presentations, made with a rubric by 107 college biology students, were collected over the 3 years of the study. Quantitative analysis shows that the rubric was used consistently by both students and the instructor across the study years. Moreover, the rubric appears to be 'gender neutral', and students' academic strength has no significant bearing on how they employ it. A significant one-to-one relationship (slope = 1.0) between the instructor's assessment and the students' ratings is seen in all years. A generalizability study yields moderate estimates of inter-rater reliability in all years and allows the variance components to be estimated. Taken together, these results indicate that the general form and evaluative criteria of the rubric are clear and that the rubric is a useful tool for peer-group (and self-) assessment by students. To our knowledge, these data provide the first statistical documentation of the validity and reliability of the rubric for student peer-group assessment.
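The generalizability analysis described in the abstract can be sketched as a one-facet crossed G-study (persons × raters), in which ANOVA mean squares yield variance components and an inter-rater reliability (G) coefficient. The sketch below is illustrative only: the function names and the worked data are hypothetical and are not taken from the paper.

```python
import numpy as np

def variance_components(ratings):
    """ANOVA (method-of-moments) variance components for a
    persons x raters matrix in a one-facet crossed G-study.
    Returns (var_person, var_rater, var_residual)."""
    n_p, n_r = ratings.shape
    grand = ratings.mean()
    person_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Sums of squares for persons, raters, and the residual
    ss_p = n_r * ((person_means - grand) ** 2).sum()
    ss_r = n_p * ((rater_means - grand) ** 2).sum()
    ss_res = ((ratings - grand) ** 2).sum() - ss_p - ss_r

    # Mean squares
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))

    # Expected-mean-square equations, negative estimates clipped to 0
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_r, 0.0)
    var_r = max((ms_r - ms_res) / n_p, 0.0)
    return var_p, var_r, var_res

def g_coefficient(var_p, var_res, n_raters=1):
    """Relative G coefficient: person variance over person variance
    plus relative error variance for a mean over n_raters raters."""
    return var_p / (var_p + var_res / n_raters)
```

For example, a matrix in which two raters agree perfectly on three presenters (`[[1, 1], [2, 2], [3, 3]]`) has zero residual variance and a G coefficient of 1.0, while disagreement between raters inflates the residual component and pulls the coefficient toward 0.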
Pages: 1509-1528 (20 pages)