Automated Writing Evaluation and Feedback: Multiple Metrics of Efficacy

Times Cited: 79
Authors
Wilson, Joshua [1 ]
Roscoe, Rod D. [2 ]
Affiliations
[1] Univ Delaware, Sch Educ, 213E Willard Hall Educ Bldg, Newark, DE 19716 USA
[2] Arizona State Univ Polytech, Human Syst Engn, Mesa, AZ USA
Keywords
automated writing evaluation; interactive learning environments; automated feedback; writing; writing self-efficacy; ESSAY EVALUATION SOFTWARE; OF-THE-ART; SELF-EFFICACY; TEACHER FEEDBACK; SOCIAL VALIDITY; STUDENTS; TECHNOLOGY; BELIEFS; PERCEPTIONS; PERFORMANCE
DOI
10.1177/0735633119830764
CLC Number
G40 [Education]
Subject Classification Codes
040101; 120403
Abstract
The present study extended research on the effectiveness of automated writing evaluation (AWE) systems. Sixth graders were randomly assigned by classroom to an AWE condition that used Project Essay Grade Writing (n = 56) or a word-processing condition that used Google Docs (n = 58). Effectiveness was evaluated using multiple metrics: writing self-efficacy, holistic writing quality, performance on a state English language arts test, and teachers' perceptions of AWE's social validity. Path analyses showed that after controlling for pretest measures, composing condition had no effect on holistic writing quality, but students in the AWE condition had more positive writing self-efficacy and better performance on the state English language arts test. Posttest writing self-efficacy partially mediated the effect of composing condition on state test performance. Teachers reported positive perceptions of AWE's social validity. Results emphasize the importance of using multiple metrics and considering both contextual factors and AWE implementation methods when evaluating AWE effectiveness.
Pages: 87-125
Page Count: 39