Automated writing evaluation (AWE) feedback: a systematic investigation of college students' acceptance

Cited by: 51
Authors
Zhai, Na [1 ,2 ]
Ma, Xiaomei [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Foreign Languages, Xian 710049, Peoples R China
[2] Xian Fanyi Univ, Sch Translat Studies, Xian, Peoples R China
Keywords
automated writing evaluation; college students; feedback; structural equation modeling; technology acceptance model;
DOI
10.1080/09588221.2021.1897019
Chinese Library Classification
G40 [Education];
Discipline codes
040101; 120403;
Abstract
Automated writing evaluation (AWE) has been used increasingly to provide feedback on student writing. Previous research has typically focused on its inter-rater reliability with human graders and on validation frameworks, while the limited body of learner-oriented work has discussed students' attitudes or perceptions only in general terms; a systematic investigation of the factors driving students' acceptance is still lacking. This study proposes an extended technology acceptance model (TAM) to identify the environmental, individual, educational, and systemic factors that influence college students' acceptance of AWE feedback and to examine how these factors affect students' usage intention. Structural equation modeling (SEM) was used to analyze quantitative survey data from 448 Chinese college students who had used AWE feedback for at least one semester. Results revealed that students' behavioral intention to use AWE feedback was affected by subjective norm, facilitating conditions, perceived trust, AWE self-efficacy, cognitive feedback, and system characteristics. Specifically, subjective norm, perceived trust, and cognitive feedback positively influenced perceived usefulness; facilitating conditions, AWE self-efficacy, and system characteristics were significant determinants of perceived ease of use; and anxiety played no role for experienced users. Implications of these findings for AWE developers and practitioners are further elaborated.
Pages: 2817-2842
Page count: 26