Towards multimodal emotion recognition in e-learning environments

Cited by: 78
Authors
Bahreini, Kiavash [1 ]
Nadolski, Rob [1 ]
Westera, Wim [1 ]
Affiliation
[1] Open Univ Netherlands, Ctr Learning Sci & Technol CELSTEC, Valkenburgerweg 177, NL-6419 AT Heerlen, Netherlands
Keywords
e-learning; human-computer interaction; multimodal emotion recognition; real-time face emotion recognition; webcam;
DOI
10.1080/10494820.2014.908927
Chinese Library Classification
G40 [Education];
Discipline codes
040101 ; 120403 ;
Abstract
This paper presents FILTWAM (Framework for Improving Learning Through Webcams And Microphones), a framework for real-time emotion recognition in e-learning using webcams. FILTWAM offers timely and relevant feedback based upon learners' facial expressions and verbalizations. FILTWAM's facial expression software module has been developed and tested in a proof-of-concept study. The main goal of this study was to validate the use of webcam data for a real-time and adequate interpretation of facial expressions into extracted emotional states. The software was calibrated with 10 test persons, who all received the same computer-based tasks in which each was asked 100 times to mimic specific facial expressions. All sessions were recorded on video. To validate the face emotion recognition software, two experts annotated and rated participants' recorded behaviours. The expert findings were contrasted with the software results and showed an overall kappa of 0.77. The overall accuracy of the software, comparing the requested emotions with the recognized emotions, is 72%. Whereas existing software allows only non-real-time, discontinuous and obtrusive facial detection, our software continuously and unobtrusively monitors learners' behaviours and converts them directly into emotional states. This paves the way for enhancing the quality and efficacy of e-learning by taking the learner's emotional states into account.
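The inter-rater agreement reported above (kappa of 0.77) is the standard Cohen's kappa statistic, which corrects raw agreement for agreement expected by chance. A minimal sketch of that computation; the annotation data below is invented for illustration, not from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance,
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[lbl] * freq_b[lbl]
              for lbl in set(freq_a) | set(freq_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two annotators labelling six recorded expressions.
expert_1 = ["happy", "sad", "happy", "angry", "sad", "happy"]
expert_2 = ["happy", "sad", "happy", "sad",   "sad", "happy"]
print(round(cohens_kappa(expert_1, expert_2), 3))
```

A kappa of 0.77, as in the study, is conventionally read as substantial agreement (the 0.61-0.80 band in the Landis and Koch scale), which is what makes the expert annotations a credible ground truth for the 72% accuracy figure.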
Pages: 590-605
Page count: 16
Related papers
50 records in total
  • [21] emoLearnAdapt: A new approach for an emotion-based adaptation in e-learning environments
    Boughida, Adil
    Kouahla, Mohamed Nadjib
    Lafifi, Yacine
    [J]. EDUCATION AND INFORMATION TECHNOLOGIES, 2024, 29 (12) : 15269 - 15323
  • [22] E-LEARNING IN COMPANY ENVIRONMENTS
    Seitlova, K.
    Czyz, H.
    [J]. EDULEARN13: 5TH INTERNATIONAL CONFERENCE ON EDUCATION AND NEW LEARNING TECHNOLOGIES, 2013, : 758 - 766
  • [23] Research on speech emotion recognition in E-Learning by using neural networks method
    Zhang, Qian
    Wang, Yan
    Wang, Lan
    Wang, Guoqiang
    [J]. 2007 IEEE INTERNATIONAL CONFERENCE ON CONTROL AND AUTOMATION, VOLS 1-7, 2007, : 715 - 718
  • [25] Facial emotion recognition using temporal relational network: an application to E-learning
    Pise, Anil
    Vadapalli, Hima
    Sanders, Ian
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (19) : 26633 - 26653
  • [26] Multimodal motivation modelling and computing towards motivationally intelligent E-learning systems
    Wang, Ruijie
    Chen, Liming
    Ayesh, Aladdin
    [J]. CCF TRANSACTIONS ON PERVASIVE COMPUTING AND INTERACTION, 2023, 5 (01) : 64 - 81
  • [28] Towards a Multi-modal Emotion-awareness e-Learning System
    Caballe, Santi
    [J]. 2015 INTERNATIONAL CONFERENCE ON INTELLIGENT NETWORKING AND COLLABORATIVE SYSTEMS IEEE INCOS 2015, 2015, : 280 - 287
  • [29] Towards learner-constructed e-learning environments for effective personal learning experiences
    Nganji, Julius T.
    [J]. BEHAVIOUR & INFORMATION TECHNOLOGY, 2018, 37 (07) : 647 - 657
  • [30] Interactive Robot Learning for Multimodal Emotion Recognition
    Yu, Chuang
    Tapus, Adriana
    [J]. SOCIAL ROBOTICS, ICSR 2019, 2019, 11876 : 633 - 642