Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning

Cited by: 36
Authors
Bahreini, Kiavash [1 ]
Nadolski, Rob [1 ]
Westera, Wim [1 ]
Affiliations
[1] Open Univ Netherlands, Fac Psychol & Educ Sci, Res Ctr Learning Teaching & Technol, Welten Inst, Valkenburgerweg 177, NL-6419 AT Heerlen, Netherlands
Keywords
Facial expression; Impact; Audio
DOI
10.1080/10447318.2016.1159799
Chinese Library Classification
TP3 [computing technology; computer technology]
Discipline Code
0812
Abstract
This article describes the validation study of our software, which uses combined webcam and microphone data for real-time, continuous, unobtrusive emotion recognition as part of our FILTWAM framework. FILTWAM aims to deploy a real-time multimodal emotion recognition method that provides more adequate feedback to learners during online communication skills training. Such feedback must be timely, reflect the emotions learners intend to show, and help increase learners' awareness of their own behavior. At a minimum, a reliable and valid software interpretation of performed facial and vocal emotions is needed to warrant such feedback. This validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behavior was recorded on video, and two raters independently scored the emotions shown, which were contrasted with the software's recognition outcomes. A hybrid method for multimodal fusion in our software achieves accuracies between 96.1% and 98.6% for the best-chosen WEKA classifiers over the predicted emotions. The software fulfils its requirements of real-time data interpretation and reliable results.
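The abstract names a hybrid multimodal fusion method but the record contains no code, and the paper's actual WEKA pipeline is not reproduced here. The sketch below illustrates only the general idea of decision-level fusion of a face and a voice classifier by weighted probability averaging; the emotion labels, function names, weights, and probability values are all illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of decision-level (late) fusion of two modality
# classifiers. Labels, weights, and probabilities are assumptions for
# demonstration only, not values from the FILTWAM study.

def fuse_predictions(face_probs, voice_probs, face_weight=0.6):
    """Combine per-emotion probabilities from a face classifier and a
    voice classifier by weighted averaging, then pick the arg-max label."""
    if set(face_probs) != set(voice_probs):
        raise ValueError("both modalities must score the same emotion set")
    fused = {
        emotion: face_weight * face_probs[emotion]
                 + (1.0 - face_weight) * voice_probs[emotion]
        for emotion in face_probs
    }
    best = max(fused, key=fused.get)
    return best, fused

# Example: the face channel leans 'happy', the voice channel 'neutral'.
face = {"happy": 0.70, "neutral": 0.20, "sad": 0.10}
voice = {"happy": 0.30, "neutral": 0.55, "sad": 0.15}
label, scores = fuse_predictions(face, voice)
print(label)  # 'happy': 0.6*0.70 + 0.4*0.30 = 0.54 vs neutral's 0.34
```

Weighting the face channel more heavily here is just one possible design choice; a hybrid scheme like the one the abstract describes could also combine feature-level and decision-level fusion, which this sketch does not attempt.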
Pages: 415–430
Page count: 16
Related Papers
50 records in total
  • [1] Towards real-time speech emotion recognition for affective e-learning
    Bahreini, K.
    Nadolski, R.
    Westera, W.
    Education and Information Technologies, 2016, 21(5): 1367–1386
  • [2] Real-time fear emotion recognition in mice based on multimodal data fusion
    Wang, Hao
    Shi, Zhanpeng
    Hu, Ruijie
    Wang, Xinyi
    Chen, Jian
    Che, Haoyuan
    Scientific Reports, 2025, 15(1)
  • [3] Real-time music emotion recognition based on multimodal fusion
    Hao, Xingye
    Li, Honghe
    Wen, Yonggang
    Alexandria Engineering Journal, 2025, 116: 586–600
  • [4] Towards multimodal emotion recognition in e-learning environments
    Bahreini, Kiavash
    Nadolski, Rob
    Westera, Wim
    Interactive Learning Environments, 2016, 24(3): 590–605
  • [5] Multimodal emotion recognition system for e-learning platform
    Vani, R. K. Kapila
    Jayashree, P.
    Education and Information Technologies, 2025
  • [6] Multimodal Attentive Learning for Real-time Explainable Emotion Recognition in Conversations
    Arumugam, Balaji
    Das Bhattacharjee, Sreyasee
    Yuan, Junsong
    2022 IEEE International Symposium on Circuits and Systems (ISCAS 22), 2022: 1210–1214
  • [7] Real-Time Emotion Classification Using EEG Data Stream in E-Learning Contexts
    Nandi, Arijit
    Xhafa, Fatos
    Subirats, Laia
    Fort, Santi
    Sensors, 2021, 21(5): 1–26
  • [8] Character agents in e-learning interface using multimodal real-time interaction
    Wang, Hua
    Yang, He
    Chignell, Mark
    Ishizuka, Mitsuru
    Human-Computer Interaction, Pt 3, Proceedings, 2007, 4552: 225+
  • [9] Deep CNN with late fusion for real time multimodal emotion recognition
    Dixit, Chhavi
    Satapathy, Shashank Mouli
    Expert Systems with Applications, 2024, 240
  • [10] Real-time learning behavior mining for e-learning
    Kuo, YH
    Chen, JN
    Jeng, YL
    Huang, YM
    2005 IEEE/WIC/ACM International Conference on Web Intelligence, Proceedings, 2005: 653–656