An NLP-Based Exploration of Variance in Student Writing and Syntax: Implications for Automated Writing Evaluation

Cited by: 0
Authors
Goldshtein, Maria [1 ]
Alhashim, Amin G. [2 ]
Roscoe, Rod D. [1 ]
Affiliations
[1] Arizona State Univ, Human Syst Engn, Mesa, AZ 85212 USA
[2] Macalester Coll, Math Stat & Comp Sci, St Paul, MN 55105 USA
Keywords
automated writing evaluation; natural language processing; student writing variability; syntax; writing styles
Keywords Plus
EXPLORING MULTIPLE PROFILES; LINGUISTIC FEATURES; LANGUAGE; FEEDBACK; COMPLEXITY; QUALITY; SOPHISTICATION; PERFORMANCE; SUPPORT; SKILLS
DOI
10.3390/computers13070160
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Subject Classification Codes
081203; 0835
Abstract
In writing assessment, expert human evaluators ideally judge individual essays with attention to variance among writers' syntactic patterns; there are many ways to compose text successfully or less successfully. For automated writing evaluation (AWE) systems to provide accurate assessment and relevant feedback, they must be able to consider similar kinds of variance. The current study employed natural language processing (NLP) to explore variance in syntactic complexity and sophistication in a large corpus (n = 36,207) of middle school and high school argumentative essays. Using NLP tools, k-means clustering, and discriminant function analysis (DFA), we observed that student writers employed four distinct syntactic patterns: (1) familiar and descriptive language, (2) consistently simple noun phrases, (3) variably complex noun phrases, and (4) moderate complexity with less familiar language. Importantly, each pattern spanned the full range of writing quality; no syntactic pattern was consistently evaluated as "good" or "bad". These findings support the need for nuanced approaches in automated writing assessment and inform how AWE systems can participate in that process. Future AWE research can and should explore similar variability across other detectable elements of writing (e.g., vocabulary, cohesion, discursive cues, and sentiment) via diverse modeling methods.
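The pipeline summarized in the abstract (NLP-derived syntactic indices, k-means clustering, then DFA to characterize the clusters) can be illustrated with a minimal Python sketch. This is not the authors' code: the input file syntactic_indices.csv and its feature columns are hypothetical, k = 4 mirrors the four patterns reported above, and linear discriminant analysis stands in for DFA as its common implementation.

# Minimal sketch of a cluster-then-discriminate analysis, assuming each
# essay has already been scored on hypothetical syntactic complexity and
# sophistication indices stored as columns of a CSV (one row per essay).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv("syntactic_indices.csv")  # hypothetical feature table

# Standardize indices so no single measure dominates the distance metric.
X = StandardScaler().fit_transform(df.values)

# Partition essays into four syntactic profiles (k chosen per the study).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# LDA (standing in for DFA) tests how well the syntactic indices
# separate the recovered clusters.
dfa = LinearDiscriminantAnalysis()
dfa.fit(X, labels)
print(f"Cluster classification accuracy: {dfa.score(X, labels):.2f}")

In practice, one would also validate the choice of k (e.g., with silhouette scores) and cross-validate the discriminant model rather than scoring it on the data it was fit to; the sketch only shows the shape of the workflow.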
Pages: 23