Large language models meet user interfaces: The case of provisioning feedback

Cited by: 0
Authors
Pozdniakov, Stanislav [1 ]
Brazil, Jonathan [2 ]
Abdi, Solmaz [1 ]
Bakharia, Aneesha [1 ]
Sadiq, Shazia [1 ]
Gašević, Dragan [3 ]
Denny, Paul [4 ]
Khosravi, Hassan [1 ]
Affiliations
[1] School of Electrical Engineering and Computer Science, The University of Queensland, St Lucia, 4072, QLD
[2] Institute for Teaching and Learning Innovation, The University of Queensland, St Lucia, 4072, QLD
[3] Centre for Learning Analytics, Faculty of Information Technology, Monash University, Melbourne, 3800, VIC
[4] School of Computer Science, University of Auckland, 38 Princes Street, Auckland
Source
Computers and Education: Artificial Intelligence | 2024, Vol. 7
Funding
Australian Research Council
Keywords
Artificial intelligence; Feedback; Generative artificial intelligence; Interfaces; Large language models; Learning analytics;
DOI
10.1016/j.caeai.2024.100289
Abstract
Incorporating Generative Artificial Intelligence (GenAI), especially Large Language Models (LLMs), into educational settings presents valuable opportunities to boost the efficiency of educators and enrich the learning experiences of students. A significant portion of educators' current use of LLMs has involved conversational user interfaces (CUIs), such as chat windows, for functions like generating educational materials or offering feedback to learners. The ability to engage in real-time conversations with LLMs, which can enhance educators' domain knowledge across various subjects, has been of high value. However, it also presents challenges to the widespread, ethical, and effective adoption of LLMs. Firstly, educators must have a degree of expertise, including tool familiarity, AI literacy, and prompting skills, to use CUIs effectively, which can be a barrier to adoption. Secondly, the open-ended design of CUIs makes them exceptionally powerful, which raises ethical concerns, particularly when they are used for high-stakes decisions like grading. Additionally, there are risks related to privacy and intellectual property, stemming from the potential unauthorised sharing of sensitive information. Finally, CUIs are designed for short, synchronous interactions and often struggle and hallucinate when given complex, multi-step tasks (e.g., providing individual feedback based on a rubric on a large scale). To address these challenges, we explore the benefits of transitioning away from employing LLMs via CUIs towards building applications with user-friendly interfaces that leverage LLMs through API calls. We first propose a framework for the pedagogically sound and ethically responsible incorporation of GenAI into educational tools, emphasising a human-centred design. We then illustrate the application of our framework to the design and implementation of a novel tool called Feedback Copilot, which enables instructors to provide students with personalised qualitative feedback on their assignments in classes of any size. An evaluation involving the generation of feedback from two distinct variations of the Feedback Copilot tool, using numerically graded assignments from 338 students, demonstrates the viability and effectiveness of our approach. Our findings have significant implications for GenAI application researchers, educators seeking to leverage accessible GenAI tools, and educational technologists aiming to transcend the limitations of conversational AI interfaces, thereby charting a course for the future of GenAI in education. © 2024 The Authors
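The abstract describes moving from chat-style CUIs to purpose-built tools that call an LLM through an API to generate rubric-aligned, per-student feedback at scale. As a rough illustration only, the sketch below shows what such an API-driven feedback loop could look like; it assumes the OpenAI Python client, a "gpt-4o" model, and hypothetical rubric/submission fields, none of which are stated in the abstract, and the actual Feedback Copilot implementation may differ.

```python
# Minimal sketch (assumptions, not the paper's implementation): generate
# rubric-based qualitative feedback for many graded submissions via an LLM API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_feedback(rubric: str, submission: str, grade: float) -> str:
    """Ask the model for short, personalised feedback on one submission."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant. Give constructive, "
                        "rubric-aligned feedback; do not change the awarded grade."},
            {"role": "user",
             "content": f"Rubric:\n{rubric}\n\nAwarded grade: {grade}\n\n"
                        f"Student submission:\n{submission}\n\n"
                        "Write 3-4 sentences of qualitative feedback."},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

# Batch use over a whole class, e.g. hundreds of numerically graded assignments:
# submissions = [{"id": "s001", "text": "...", "grade": 7.5}, ...]
# feedback = {s["id"]: generate_feedback(rubric_text, s["text"], s["grade"])
#             for s in submissions}
```

Wrapping the call in an application with a fixed prompt template and a simple interface, rather than an open-ended chat window, is the kind of constrained, human-centred design the abstract argues for.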
Related papers (50 total)
  • [31] Assessing the proficiency of large language models in automatic feedback generation: An evaluation study
    Dai, Wei
    Tsai, Yi-Shan
    Lin, Jionghao
    Aldino, Ahmad
    Jin, Hua
    Li, Tongguang
    Gašević, Dragan
    Chen, Guanliang
    Computers and Education: Artificial Intelligence, 2024, 7
  • [32] Investigating the Proficiency of Large Language Models in Formative Feedback Generation for Student Programmers
    Kumar, Smitha S.
    Lones, Michael Adam
    Maarek, Manuel
    Zantout, Hind
    2024 International Workshop on Large Language Models for Code (LLM4Code 2024), 2024: 88-93
  • [33] AI providers as criminal essay mills? Large language models meet contract cheating law
    Gaumann, Noelle
    Veale, Michael
    Information & Communications Technology Law, 2024, 33(03): 276-309
  • [34] Making Large Language Models More Reliable and Beneficial: Taking ChatGPT as a Case Study
    Majeed, Abdul
    Hwang, Seong Oun
    Computer, 2024, 57(03): 101-106
  • [35] The Language of Creativity: Evidence from Humans and Large Language Models
    Orwig, William
    Edenbaum, Emma R.
    Greene, Joshua D.
    Schacter, Daniel L.
    Journal of Creative Behavior, 2024, 58(01): 128-136
  • [36] Designing Effective Feedback of Electricity Consumption for Mobile User Interfaces
    Jacucci, Giulio
    Spagnolli, Anna
    Gamberini, Luciano
    Chalambalakis, Alessandro
    Bjorksog, Christoffer
    Bertoncini, Massimo
    Torstensson, Carin
    Monti, Pasquale
    PsychNology Journal, 2009, 7(03): 265-289
  • [37] Large Language Models Demonstrate the Potential of Statistical Learning in Language
    Contreras Kallens, Pablo
    Kristensen-McLachlan, Ross Deans
    Christiansen, Morten H.
    Cognitive Science, 2023, 47(03): e13256
  • [38] Extracting Implicit User Preferences in Conversational Recommender Systems Using Large Language Models
    Kim, Woo-Seok
    Lim, Seongho
    Kim, Gun-Woo
    Choi, Sang-Min
    Mathematics, 2025, 13(02)
  • [39] An assessment of large language models for OpenMP-based code parallelization: a user perspective
    Misic, Marko
    Dodovic, Matija
    Journal of Big Data, 2024, 11(01)
  • [40] Prompting large language models for user simulation in task-oriented dialogue systems
    Algherairy, Atheer
    Ahmed, Moataz
    Computer Speech and Language, 2025, 89