Using Large Language Models for Automated Grading of Student Writing about Science

Cited by: 0
Authors
Impey, Chris [1 ]
Wenger, Matthew [1 ]
Garuda, Nikhil [1 ]
Golchin, Shahriar [2 ]
Stamer, Sarah [1 ]
Affiliations
[1] Univ Arizona, Dept Astron, Tucson, AZ 85721 USA
[2] Univ Arizona, Dept Comp Sci, Tucson, AZ 85721 USA
Funding
U.S. National Science Foundation
Keywords
Student writing; Science classes; Online education; Assessment; Machine learning; Large language models; ONLINE; ASTRONOMY; RATER;
DOI
10.1007/s40593-024-00453-7
CLC number (Chinese Library Classification)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Assessing writing in large classes for formal or informal learners presents a significant challenge. Consequently, most large classes, particularly in science, rely on objective assessment tools such as multiple-choice quizzes, which have a single correct answer. The rapid development of AI has introduced the possibility of using large language models (LLMs) to evaluate student writing. An experiment was conducted using GPT-4 to determine whether machine learning methods based on LLMs can match or exceed the reliability of instructor grading in evaluating short writing assignments on topics in astronomy. The audience consisted of adult learners in three massive open online courses (MOOCs) offered through Coursera: one on astronomy, one on astrobiology, and one on the history and philosophy of astronomy. The results should also be applicable to non-science majors in university settings, where the content and modes of evaluation are similar. The data comprised answers from 120 students to 12 questions across the three courses. GPT-4 was provided with total grades, model answers, and rubrics from an instructor for all three courses. In addition to being evaluated on how reliably it reproduced instructor grades, the LLM was also tasked with generating its own rubrics. Overall, the LLM was more reliable than peer grading, both in aggregate and by individual student, and approximately matched instructor grades for all three online courses. The implication is that LLMs may soon be used for automated, reliable, and scalable grading of student science writing.
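For illustration only, the sketch below shows how a rubric-based grading call of the kind the abstract describes might be set up with the OpenAI Python client. It is a minimal assumption-laden example, not the authors' actual instrument: the prompt wording, the 0-10 scale, the sample question, and the rubric text are all hypothetical, and the paper's real setup supplies instructor rubrics, model answers, and total grades per question.

```python
# Minimal sketch (not the authors' code): grading one short answer with GPT-4
# via the OpenAI chat API. Prompt wording, rubric, and the 0-10 scale are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def grade_answer(question: str, rubric: str, model_answer: str, student_answer: str) -> str:
    """Ask GPT-4 to score a student's short answer against a rubric and model answer."""
    prompt = (
        "You are grading a short written answer from an introductory astronomy course.\n"
        f"Question: {question}\n"
        f"Rubric: {rubric}\n"
        f"Model answer: {model_answer}\n"
        f"Student answer: {student_answer}\n"
        "Return a score from 0 to 10 and a one-sentence justification."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring aids reliability comparisons
    )
    return response.choices[0].message.content


# Hypothetical usage with an invented question and rubric
print(grade_answer(
    question="Why do stars appear to twinkle when viewed from the ground?",
    rubric="4 pts: mentions Earth's atmosphere; 4 pts: turbulence/refraction; 2 pts: clarity.",
    student_answer="Because the air above us moves around and bends the light a little.",
    model_answer=(
        "Turbulence in Earth's atmosphere refracts starlight along a shifting path, "
        "so the star's apparent brightness and position flicker."
    ),
))
```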
Pages: 35