Large Language Models (GPT) for automating feedback on programming assignments

Cited by: 0
Authors
Pankiewicz, Maciej [1 ]
Baker, Ryan S. [2 ]
Affiliations
[1] Warsaw Univ Life Sci, Inst Informat Technol, Warsaw, Poland
[2] Univ Penn, Penn Ctr Learning Analyt, Philadelphia, PA USA
Source
31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I | 2023
Keywords
Programming; automated assessment tools; automated feedback; LLM; GPT
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Addressing the challenge of generating personalized feedback for programming assignments is demanding due to several factors, such as the complexity of code syntax and the many ways a task can be solved correctly. In this experimental study, we automated the process of feedback generation by employing OpenAI's GPT-3.5 model to generate personalized hints for students solving programming assignments on an automated assessment platform. Students rated the usefulness of GPT-generated hints positively. The experimental group (with GPT hints enabled) relied less on the platform's regular feedback but performed better in terms of the percentage of successful submissions across consecutive attempts on tasks where GPT hints were enabled. For tasks where the GPT feedback was made unavailable, the experimental group needed significantly less time to solve assignments. Furthermore, when GPT hints were unavailable, students in the experimental condition were initially less likely to solve the assignment correctly. This suggests potential over-reliance on GPT-generated feedback. However, students in the experimental condition were able to catch up reasonably rapidly, reaching the same percentage correct after seven submission attempts. The availability of GPT hints did not significantly impact students' affective state.
Pages: 68-77
Number of pages: 10
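
The abstract describes hints produced by OpenAI's GPT-3.5 model for student submissions on an automated assessment platform. As a purely illustrative sketch (not the authors' implementation), a hint request of this kind could look roughly like the following, assuming the OpenAI chat completions API in the Python SDK; the prompt wording, model parameters, and the generate_hint helper are hypothetical.

```python
# Hypothetical sketch: requesting a personalized hint from GPT-3.5 via the
# OpenAI Python SDK (v1.x). Prompt text and parameters are assumptions, not
# the implementation used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_hint(task_description: str, student_code: str, platform_output: str) -> str:
    """Ask GPT-3.5 for a short hint without revealing a full solution."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a programming tutor. Give a short hint that helps "
                    "the student fix their program, but do not write the solution."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Task:\n{task_description}\n\n"
                    f"Student submission:\n{student_code}\n\n"
                    f"Assessment platform output:\n{platform_output}"
                ),
            },
        ],
        temperature=0.2,  # keep hints focused and reproducible
        max_tokens=200,   # hints should stay brief
    )
    return response.choices[0].message.content
```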