Large Language Models (GPT) for automating feedback on programming assignments

Cited: 0
Authors
Pankiewicz, Maciej [1]
Baker, Ryan S. [2]
Affiliations
[1] Warsaw Univ Life Sci, Inst Informat Technol, Warsaw, Poland
[2] Univ Penn, Penn Ctr Learning Analyt, Philadelphia, PA USA
Source
31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I | 2023
Keywords
Programming; automated assessment tools; automated feedback; LLM; GPT
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Generating personalized feedback for programming assignments is challenging due to several factors, such as the complexity of code syntax and the many different ways to correctly solve a task. In this experimental study, we automated feedback generation by employing OpenAI's GPT-3.5 model to produce personalized hints for students solving programming assignments on an automated assessment platform. Students rated the usefulness of the GPT-generated hints positively. The experimental group (with GPT hints enabled) relied less on the platform's regular feedback but achieved a higher percentage of successful submissions across consecutive attempts on tasks where GPT hints were enabled. On tasks where GPT feedback was made unavailable, the experimental group needed significantly less time to solve the assignments. However, when GPT hints were unavailable, students in the experimental condition were initially less likely to solve the assignment correctly, suggesting potential over-reliance on GPT-generated feedback. These students nevertheless recovered reasonably rapidly, reaching the same percentage correct after seven submission attempts. The availability of GPT hints did not significantly affect students' affective state.
Pages: 68-77
Page count: 10