Harnessing LLMs for multi-dimensional writing assessment: Reliability and alignment with human judgments

Cited by: 2
Authors
Tang, Xiaoyi [1 ]
Chen, Hongwei [1 ]
Lin, Daoyu [2 ]
Li, Kexin [1 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Foreign Studies, Beijing 100083, Peoples R China
[2] Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100094, Peoples R China
Keywords
Automated essay scoring (AES); Large language models (LLMs); Generative pre-trained transformer (GPT); Prompt engineering; Multi-dimensional writing assessment; LINGUISTIC FEATURES; QUALITY;
DOI
10.1016/j.heliyon.2024.e34262
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Recent advancements in natural language processing, computational linguistics, and Artificial Intelligence (AI) have propelled the use of Large Language Models (LLMs) in Automated Essay Scoring (AES), offering efficient and unbiased writing assessment. This study assesses the reliability of LLMs in AES tasks, focusing on scoring consistency and alignment with human raters. We explore the impact of prompt engineering, temperature settings, and multi-level rating dimensions on the scoring performance of LLMs. Results indicate that prompt engineering significantly affects the reliability of LLMs, with GPT-4 showing marked improvement over GPT-3.5 and Claude 2, achieving 112% and 114% increases in scoring accuracy under the criteria- and sample-referenced justification prompt. Temperature settings also influence the output consistency of LLMs, with lower temperatures producing scores more in line with human evaluations, which is essential for maintaining fairness in large-scale assessment. Regarding multi-dimensional writing assessment, results indicate that GPT-4 performs well in the Ideas (QWK = 0.551) and Organization (QWK = 0.584) dimensions under well-crafted prompt engineering. These findings pave the way for a comprehensive exploration of LLMs' broader educational implications, offering insights into their capability to refine and potentially transform writing instruction, assessment, and the delivery of diagnostic and personalized feedback in the AI-powered educational age. While this study focused on the reliability and alignment of LLM-powered multi-dimensional AES, future research should broaden its scope to encompass diverse writing genres and a more extensive sample from varied backgrounds.
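
The QWK values reported above refer to the quadratic weighted kappa, the agreement statistic commonly used in AES to compare machine scores with human ratings. Below is a minimal sketch of how such agreement can be computed, using scikit-learn's cohen_kappa_score with quadratic weighting; the score lists are illustrative placeholders, not data from the study, and in the study's setting the LLM scores would come from a model queried at a low temperature so that repeated scoring stays consistent.

# Quadratic weighted kappa (QWK) between human and LLM essay scores.
# The two score lists are illustrative placeholders on a 1-5 rubric.
from sklearn.metrics import cohen_kappa_score

human_scores = [3, 4, 2, 5, 4, 3, 1, 4, 5, 2]  # human rater
llm_scores   = [3, 4, 3, 5, 3, 3, 2, 4, 4, 2]  # LLM-assigned scores

qwk = cohen_kappa_score(human_scores, llm_scores, weights="quadratic")
print(f"QWK = {qwk:.3f}")  # 1.0 = perfect agreement, 0 = chance-level

QWK penalizes large disagreements quadratically, so a scorer that lands close to the human rating is rewarded even when not identical, which is why it is the standard alignment measure in AES benchmarks.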
Pages: 18