Computer or human: a comparative study of automated evaluation scoring and instructors' feedback on Chinese college students' English writing

Cited by: 9
Authors
Chen, Huimei [1 ,2 ]
Pan, Jie [1 ,2 ]
Affiliations
[1] Northern Arizona University, College of Education, Flagstaff, AZ 86011, USA
[2] Shanghai Normal University Tianhua College, Shanghai, People's Republic of China
Keywords
EFL writing; AES system; human rater; higher education; written corrective feedback; revisiting teacher feedback; foreign language; system
DOI
10.1186/s40862-022-00171-4
Chinese Library Classification (CLC) number
G40 [Education]
Subject classification codes
040101; 120403
Abstract
The role of internet technology in higher education, and particularly in the teaching of English as a foreign language (EFL), has become increasingly prominent because of growing interest in how technology can be used to support students. Automated evaluation scoring (AES) systems are a typical application of network technology to the teaching of English writing. Many writing scoring platforms that provide instant online corrective feedback on students' writing have been developed and used in China. However, the validity of Aim Writing, a product developed by Microsoft Research Asia that claims to be the best tool for supporting Chinese EFL learners, has not been tested in previous studies. This mixed-methods study investigates the feedback Aim Writing provides and its effect on college students' writing, comparing both with the instructor's feedback. The results indicate that Aim Writing's performance is insufficient to meet all students' writing needs and that colleges should encourage a hybrid model that combines AES and instructor feedback on writing.
Pages: 20