Automated writing evaluation (AWE) feedback: a systematic investigation of college students' acceptance

Cited by: 55
Authors
Zhai, Na [1, 2]
Ma, Xiaomei [1]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Foreign Languages, Xian 710049, Peoples R China
[2] Xian Fanyi Univ, Sch Translat Studies, Xian, Peoples R China
Keywords
automated writing evaluation; college students; feedback; structural equation modeling; technology acceptance model
DOI
10.1080/09588221.2021.1897019
Chinese Library Classification
G40 [Education]
Discipline codes
040101; 120403
Abstract
Automated writing evaluation (AWE) has been used increasingly to provide feedback on student writing. Previous research typically focused on its inter-rater reliability with human graders and on validation frameworks; the limited body of research on learners has discussed students' attitudes or perceptions only in general terms. A systematic investigation of the driving factors behind students' acceptance is still lacking. This study proposes an extended technology acceptance model (TAM) to identify the environmental, individual, educational, and systemic factors that influence college students' acceptance of AWE feedback and to examine how these factors affect students' usage intention. Structural equation modeling (SEM) was used to analyze quantitative survey data from 448 Chinese college students who had used AWE feedback for at least one semester. Results revealed that students' behavioral intention to use AWE feedback was affected by subjective norm, facilitating conditions, perceived trust, AWE self-efficacy, cognitive feedback, and system characteristics. Among these, subjective norm, perceived trust, and cognitive feedback positively influenced perceived usefulness; facilitating conditions, AWE self-efficacy, and system characteristics were significant determinants of perceived ease of use; and anxiety played no role for experienced users. Implications of these findings for AWE developers and practitioners are further elaborated.
Pages: 2817-2842
Number of pages: 26