Accuracy of Automated Written Expression Curriculum-Based Measurement Scoring

Cited by: 3
Authors
Mercer, Sterett H. [1 ]
Cannon, Joanna E. [1 ]
Squires, Bonita [2 ,5 ]
Guo, Yue [3 ]
Pinco, Ella [4 ]
Affiliations
[1] Univ British Columbia, Educ & Counselling Psychol & Special Educ Dept, Vancouver, BC, Canada
[2] Univ British Columbia, Fac Educ, Vancouver, BC, Canada
[3] Univ British Columbia, Special Educ, Vancouver, BC, Canada
[4] Univ British Columbia, Vancouver, BC, Canada
[5] Dalhousie Univ, Fac Hlth, Halifax, NS, Canada
Keywords
written expression; curriculum-based measurement; automated text evaluation; screening; progress monitoring; GRADES 3; RELIABILITY; VALIDITY
DOI
10.1177/0829573520987753
CLC Number
G44 [Educational Psychology]
Discipline Classification Code
0402; 040202
Abstract
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be accurately used to computer score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1 to 12 who received 1:1 academic tutoring through a community-based organization completed narrative writing samples in the fall and spring across two academic years. The samples were evaluated using four automated and hand-calculated WE-CBM scoring metrics. Results indicated automated and hand-calculated scores were highly correlated at all four timepoints for counts of total words written (rs = 1.00), words spelled correctly (rs = .99-1.00), correct word sequences (CWS; rs = .96-.97), and correct minus incorrect word sequences (CIWS; rs = .86-.92). For CWS and CIWS, however, automated scores systematically overestimated hand-calculated scores, with an unacceptable amount of error for CIWS for some types of decisions. These findings provide preliminary evidence that aWE-CBM can be used to efficiently score narrative writing samples, potentially improving the feasibility of implementing multi-tiered systems of support in which the written expression skills of large numbers of students are screened and monitored.
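The scoring metrics compared in the study (total words written, words spelled correctly, and correct word sequences) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the scoring procedure used in the paper: real WE-CBM scoring follows detailed conventions (e.g., CWS also credits capitalization, punctuation, and grammatical correctness at sequence boundaries), and the small `lexicon` set here stands in for a full spelling dictionary.

```python
def total_words_written(sample: str) -> int:
    """TWW: count of words in the sample, regardless of spelling."""
    return len(sample.split())

def words_spelled_correctly(sample: str, lexicon: set[str]) -> int:
    """WSC: count of words found in the spelling lexicon."""
    return sum(word in lexicon for word in sample.lower().split())

def correct_word_sequences(sample: str, lexicon: set[str]) -> int:
    """Simplified CWS: count adjacent word pairs in which both words are
    correctly spelled. (Actual CWS scoring also evaluates grammar,
    punctuation, and sentence-boundary conventions.)"""
    words = sample.lower().split()
    return sum(1 for a, b in zip(words, words[1:])
               if a in lexicon and b in lexicon)

# Example with one misspelling ("fastt"):
sample = "The dog ran fastt home"
lexicon = {"the", "dog", "ran", "fast", "home"}
print(total_words_written(sample))            # 5
print(words_spelled_correctly(sample, lexicon))  # 4
print(correct_word_sequences(sample, lexicon))   # 2
```

Under this simplification, a metric like correct minus incorrect word sequences (CIWS) compounds errors from both correct and incorrect sequence counts, which is consistent with the abstract's finding that CIWS showed the weakest automated/hand-scored agreement (rs = .86-.92).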
Pages: 304-317 (14 pages)