Unveiling the Impact of Large Language Models on Student Learning: A Comprehensive Case Study

Cited by: 1
Authors
Zdravkova, Katerina [1 ]
Dalipi, Fisnik [2 ]
Ahlgren, Fredrik [2 ]
Ilijoski, Bojan [1 ]
Ohlsson, Tobias [2 ]
Affiliations
[1] Ss Cyril & Methodius Univ, Skopje, North Macedonia
[2] Linnaeus Univ, Vaxjo, Sweden
Source
2024 IEEE GLOBAL ENGINEERING EDUCATION CONFERENCE, EDUCON 2024 | 2024
Keywords
AI learning tool; ChatGPT; large language models; higher education; practical implementation
DOI
10.1109/EDUCON60312.2024.10578855
CLC Number
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Large language models (LLMs) have gained worldwide popularity and have become accepted in higher education. On the basis of face-to-face interviews, a survey examining students' attitudes toward the integration of LLMs into education, and our own academic experience, we defined a realistic solution for creating assignments. It covers essay writing as well as various aspects of computer programming. The experiments were carried out during the winter semester of the 2023/24 academic year at two universities in two different countries. This paper presents the experience gained in creating two different computer science assignments with and without the use of LLMs. The comparative analysis covers three approaches: traditional, manual assignment preparation without any LLM; full reliance on LLMs; and a hybrid mode, defined by the extent to which an LLM is used in preparing the assignment. The proposed solution was evaluated quantitatively, with the aim of serving as a benchmark for examining the integration of LLMs into higher education. Findings reveal the importance of the hybrid mode as the approach most preferred by students.
Pages: 8