An error-analysis study from an EFL writing context: Human and Automated Essay Scoring Approaches

Cited by: 11
Authors
Almusharraf, Norah [1 ]
Alotaibi, Hind [2 ]
Affiliations
[1] Prince Sultan Univ, Riyadh, Saudi Arabia
[2] King Saud Univ, Riyadh, Saudi Arabia
Keywords
EFL; Writing; Correlation; Feedback; Automated essay scoring (AES); Human raters
DOI
10.1007/s10758-022-09592-z
Chinese Library Classification
G40 [Education]
Discipline Classification Codes
040101; 120403
Abstract
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can address some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. The performance of both approaches was analyzed quantitatively using Corder's (1974) error analysis approach to categorize the writing errors in a corpus of 197 essays written by English as a foreign language (EFL) learners. Pearson correlation coefficients and paired-sample t-tests were computed to compare the errors detected by the two approaches. The results revealed a moderate correlation between human raters and AES in terms of both total scores and the number of errors detected. The results also indicated that the total number of errors detected by AES was significantly higher than that detected by human raters, and that human raters tended to assign students higher scores. The findings encourage a more open attitude towards AES systems to support EFL writing teachers in assessing students' work.
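As an illustration of the statistical comparison described in the abstract, a minimal Python sketch using scipy follows. This is not the authors' analysis code: the per-essay error counts are hypothetical placeholders, and the variable names (human_errors, aes_errors) are invented for this example.

# Minimal sketch of the abstract's comparison: Pearson correlation and a
# paired-sample t-test on per-essay error counts from two scoring approaches.
# All data below is synthetic; only the corpus size (197 essays) is from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
human_errors = rng.poisson(lam=8, size=197)                # placeholder human-rater counts
aes_errors = human_errors + rng.poisson(lam=4, size=197)   # placeholder AES counts (higher)

# Pearson correlation between the two sets of error counts
r, p_corr = stats.pearsonr(human_errors, aes_errors)

# Paired-sample t-test, since both approaches scored the same essays
t, p_ttest = stats.ttest_rel(aes_errors, human_errors)

print(f"Pearson r = {r:.2f} (p = {p_corr:.3g})")
print(f"Paired t = {t:.2f} (p = {p_ttest:.3g})")

A paired (rather than independent-samples) t-test is the appropriate choice here because each essay contributes one error count from each approach.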
Pages: 1015-1031
Page count: 17