MuFIN: A Framework for Automating Multimodal Feedback Generation using Generative Artificial Intelligence

Cited by: 0
Authors
Lin, Jionghao [1 ]
Chen, Eason [1 ]
Gurung, Ashish [1 ]
Koedinger, Kenneth R. [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON LEARNING@SCALE, L@S 2024 | 2024
Keywords
Generative Artificial Intelligence; Large Language Models; Multimodal Feedback;
DOI
10.1145/3657604.3664720
Chinese Library Classification
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
Written feedback has long been a cornerstone of educational and professional settings, essential for enhancing learning outcomes. However, multimodal feedback, which integrates textual, auditory, and visual cues, promises a more engaging and effective learning experience. By leveraging multiple sensory channels, multimodal feedback better accommodates diverse learning preferences and aids deeper information retention. Despite its potential, creating multimodal feedback poses challenges, including the increased time and resources it requires. Recent advancements in generative artificial intelligence (GenAI) offer ways to automate the feedback process, but these have predominantly focused on textual feedback; the application of GenAI to generating multimodal feedback remains largely unexplored. Our study investigates GenAI techniques for generating multimodal feedback, aiming to provide such feedback to large cohorts of learners and thereby enhance learning experience and engagement. Building on this exploration, we propose a framework for automating the generation of multimodal feedback, which we name MuFIN.
Pages: 550-552
Number of pages: 3