Automated Writing Evaluation in EFL Contexts: A Review of Effectiveness, Impact, and Pedagogical Implications

Cited by: 1
Authors
Aldosemani, Tahani [1 ]
Assalahi, Hussein [2 ]
Lhothali, Areej [3 ]
Albsisi, Maram [4 ]
Affiliations
[1] Prince Sattam Bin Abdulaziz Univ, Educ Technol & Instruct Design, Al Kharj, Saudi Arabia
[2] King Abdulaziz Univ, English Language Inst, TESOL, Jeddah, Saudi Arabia
[3] King Abdulaziz Univ, Fac Comp Sci & Informat Technol, Jeddah, Saudi Arabia
[4] King Abdulaziz Univ, Comp Sci, Jeddah, Saudi Arabia
Keywords
Automated Writing Evaluation; Classroom Strategies; English as a Foreign Language; Feedback Effectiveness; Feedback Quality; Reliability; Usefulness
DOI: 10.4018/IJCALLT.329962
Chinese Library Classification: G40 [Education]
Discipline Classification Codes: 040101; 120403
Abstract
This paper reviews the literature on automated writing evaluation (AWE) feedback, particularly its perceived impact on enhancing the writing proficiency of English as a foreign language (EFL) students. Prior research has highlighted the contribution of AWE to fostering learner autonomy and alleviating teacher workloads, with a substantial focus on student engagement with AWE feedback. This review seeks to illuminate these facets and offer critical insights into AWE effectiveness, feedback quality, reliability, and usefulness. Guided by the research questions, 16 studies were selected according to specific inclusion criteria to assess the effectiveness of AWE in enhancing EFL learners' writing performance. Recommendations and implications from the reviewed articles regarding AWE implementation were synthesized and discussed. The review concludes that AWE can improve EFL students' writing skills, with effectiveness varying by student proficiency level. AWE provides quality feedback and can be a reliable and useful tool; however, human intervention remains essential to maximize its outcomes and mitigate its limitations.
Pages: 19