Large Language Models (GPT) for automating feedback on programming assignments

Cited: 0
|
Authors
Pankiewicz, Maciej [1]
Baker, Ryan S. [2]
Affiliations
[1] Warsaw Univ Life Sci, Inst Informat Technol, Warsaw, Poland
[2] Univ Penn, Penn Ctr Learning Analyt, Philadelphia, PA USA
Source
31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I | 2023
Keywords
Programming; automated assessment tools; automated feedback; LLM; GPT;
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Generating personalized feedback for programming assignments is challenging for several reasons, such as the complexity of code syntax and the many ways a task can be solved correctly. In this experimental study, we automated feedback generation by employing OpenAI's GPT-3.5 model to produce personalized hints for students solving programming assignments on an automated assessment platform. Students rated the usefulness of GPT-generated hints positively. The experimental group (with GPT hints enabled) relied less on the platform's regular feedback but performed better in terms of the percentage of successful submissions across consecutive attempts for tasks where GPT hints were enabled. For tasks where GPT feedback was made unavailable, the experimental group needed significantly less time to solve assignments. However, when GPT hints were unavailable, students in the experimental condition were initially less likely to solve the assignment correctly, suggesting potential over-reliance on GPT-generated feedback. These students nonetheless recovered reasonably rapidly, reaching the same percentage correct after seven submission attempts. The availability of GPT hints did not significantly impact students' affective state.
Pages: 68-77
Page count: 10
Related Papers
50 results
  • [1] Propagating Large Language Models Programming Feedback
    Koutcheme, Charles
    Hellas, Arto
    PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON LEARNING@SCALE, L@S 2024, 2024, : 366 - 370
  • [2] Automating Autograding: Large Language Models as Test Suite Generators for Introductory Programming
    Alkafaween, Umar
    Albluwi, Ibrahim
    Denny, Paul
    JOURNAL OF COMPUTER ASSISTED LEARNING, 2025, 41 (01)
  • [3] BeGrading: large language models for enhanced feedback in programming education
    Yousef, Mina
    Mohamed, Kareem
    Medhat, Walaa
    Mohamed, Ensaf Hussein
    Khoriba, Ghada
    Arafa, Tamer
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (02) : 1027 - 1040
  • [4] Applying Large Language Models to Enhance the Assessment of Parallel Functional Programming Assignments
    Grandel, Skyler
    Schmidt, Douglas C.
    Leach, Kevin
    2024 INTERNATIONAL WORKSHOP ON LARGE LANGUAGE MODELS FOR CODE, LLM4CODE 2024, 2024, : 102 - 110
  • [5] Evaluating the Application of Large Language Models to Generate Feedback in Programming Education
    Jacobs, Sven
    Jaschke, Steffen
    2024 IEEE GLOBAL ENGINEERING EDUCATION CONFERENCE, EDUCON 2024, 2024,
  • [6] Hands-on analysis of using large language models for the auto evaluation of programming assignments
    Mohamed, Kareem
    Yousef, Mina
    Medhat, Walaa
    Mohamed, Ensaf Hussein
    Khoriba, Ghada
    Arafa, Tamer
    INFORMATION SYSTEMS, 2025, 128
  • [8] Robustness of GPT Large Language Models on Natural Language Processing Tasks
    Xuanting C.
    Junjie Y.
    Can Z.
    Nuo X.
    Tao G.
    Qi Z.
    JISUANJI YANJIU YU FAZHAN / COMPUTER RESEARCH AND DEVELOPMENT, 2024, 61 (05) : 1128 - 1142
  • [9] Large language models for automating clinical trial matching
    Layne, Ethan
    Olivas, Claire
    Hershenhouse, Jacob
    Ganjavi, Conner
    Cei, Francesco
    Gill, Inderbir
    Cacciamani, Giovanni E.
    CURRENT OPINION IN UROLOGY, 2025, 35 (03) : 250 - 258
  • [10] UnrealMentor GPT: A System for Teaching Programming Based on a Large Language Model
    Zhu, Hongli
    Xiang, Jian
    Yang, Zhichuang
    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, 2025, 33 (03)