Large language models meet user interfaces: The case of provisioning feedback

Cited by: 0
Authors
Pozdniakov, Stanislav [1 ]
Brazil, Jonathan [2 ]
Abdi, Solmaz [1 ]
Bakharia, Aneesha [1 ]
Sadiq, Shazia [1 ]
Gašević, Dragan [3 ]
Denny, Paul [4 ]
Khosravi, Hassan [1 ]
Affiliations
[1] School of Electrical Engineering and Computer Science, The University of Queensland, St Lucia, QLD 4072, Australia
[2] Institute for Teaching and Learning Innovation, The University of Queensland, St Lucia, QLD 4072, Australia
[3] Centre for Learning Analytics, Faculty of Information Technology, Monash University, Melbourne, VIC 3800, Australia
[4] School of Computer Science, University of Auckland, 38 Princes Street, Auckland, New Zealand
Source
Computers and Education: Artificial Intelligence, 2024, Vol. 7
Funding
Australian Research Council
Keywords
Artificial intelligence; Feedback; Generative artificial intelligence; Interfaces; Large language models; Learning analytics
DOI
10.1016/j.caeai.2024.100289
Abstract
Incorporating Generative Artificial Intelligence (GenAI), especially Large Language Models (LLMs), into educational settings presents valuable opportunities to boost the efficiency of educators and enrich the learning experiences of students. A significant portion of educators' current use of LLMs has involved conversational user interfaces (CUIs), such as chat windows, for tasks like generating educational materials or offering feedback to learners. The ability to engage in real-time conversations with LLMs, which can enhance educators' domain knowledge across various subjects, has been highly valued. However, it also presents challenges to the widespread, ethical, and effective adoption of LLMs. Firstly, educators need a degree of expertise, including tool familiarity, AI literacy, and prompting skills, to use CUIs effectively, which can be a barrier to adoption. Secondly, the open-ended design of CUIs makes them exceptionally powerful, which raises ethical concerns, particularly when they are used for high-stakes decisions such as grading. Additionally, there are risks related to privacy and intellectual property, stemming from the potential unauthorised sharing of sensitive information. Finally, CUIs are designed for short, synchronous interactions and often struggle and hallucinate when given complex, multi-step tasks (e.g., providing individual, rubric-based feedback at large scale). To address these challenges, we explored the benefits of moving away from employing LLMs via CUIs towards building applications with user-friendly interfaces that access LLMs through API calls. We first propose a framework for the pedagogically sound and ethically responsible incorporation of GenAI into educational tools, emphasizing a human-centred design. We then illustrate the application of this framework through the design and implementation of a novel tool called Feedback Copilot, which enables instructors to provide students with personalized qualitative feedback on their assignments in classes of any size. An evaluation involving the generation of feedback from two distinct variations of the Feedback Copilot tool, using numerically graded assignments from 338 students, demonstrates the viability and effectiveness of our approach. Our findings have significant implications for GenAI application researchers, educators seeking to leverage accessible GenAI tools, and educational technologists aiming to transcend the limitations of conversational AI interfaces, thereby charting a course for the future of GenAI in education. © 2024 The Authors
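The approach summarised in the abstract, replacing open-ended chat interfaces with a purpose-built tool that calls an LLM through an API and grounds its output in a rubric and the grades already awarded, can be illustrated with a short sketch. The code below is not the paper's Feedback Copilot implementation; it is a minimal example assuming the OpenAI Python SDK, and the model name, rubric structure, prompt wording, and function names are illustrative assumptions only:

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def draft_feedback(submission: str, rubric: dict[str, str], scores: dict[str, float]) -> str:
    """Draft qualitative feedback grounded in rubric criteria and the grades already awarded."""
    criteria = "\n".join(
        f"- {name}: {description} (score awarded: {scores[name]})"
        for name, description in rubric.items()
    )
    prompt = (
        "You are assisting an instructor. Using the rubric criteria and the scores "
        "already awarded, write brief, constructive feedback for the student. "
        "Do not change or question the scores.\n\n"
        f"Rubric and scores:\n{criteria}\n\nStudent submission:\n{submission}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,  # keep the wording conservative
    )
    return response.choices[0].message.content

# Hypothetical usage: the instructor reviews and edits each draft before release.
rubric = {
    "Clarity": "The argument is clearly structured and easy to follow",
    "Evidence": "Claims are supported by relevant sources",
}
scores = {"Clarity": 4.0, "Evidence": 2.5}
print(draft_feedback("...student essay text...", rubric, scores))

Consistent with the human-centred design emphasised in the abstract, a tool built along these lines would treat the generated text as a draft for the instructor to review before it reaches students, rather than as feedback released automatically.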