Facial emotion recognition using temporal relational network: an application to E-learning

Cited: 0
Authors
Anil Pise
Hima Vadapalli
Ian Sanders
Institutions
[1] University of the Witwatersrand
[2] University of South Africa
Source
Multimedia Tools and Applications | 2022 / Vol. 81
Keywords
Temporal relational network; Deep learning; Segmentation; Learning affect; Relational reasoning; E-learning;
DOI
Not available
Abstract
E-learning enables the dissemination of valuable academic information to all users regardless of where they are situated. One of the challenges faced by e-learning systems is the lack of constant interaction between the user and the system. Such observability is an essential feature of a typical classroom setting, where an instructor can detect and respond to learners' reactions; facial expressions should therefore be incorporated into an e-learning platform as a comparable signal. The proposed solution is the implementation of a deep-learning-based facial image analysis model to estimate learning affect and to reflect the level of student engagement. This work proposes the use of a Temporal Relational Network (TRN) for identifying changes in the emotions on students' faces during an e-learning session. The TRN sparsely samples individual frames and then learns their causal relations, which is much more efficient than sampling dense frames and convolving them. In this paper, single-scale and multi-scale temporal relations are considered to achieve the proposed goal. Furthermore, a Multi-Layer Perceptron (MLP) is also tested as a baseline classifier. The proposed framework is end-to-end trainable for video-based Facial Emotion Recognition (FER). The proposed FER model was tested on the open-source DISFA+ database. The TRN-based model showed a significant reduction in the length of the feature set while remaining effective in recognizing expressions. The multi-scale TRN produced better accuracy (92.7%) than the single-scale TRN (89.4%) and the MLP (86.6%).
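The sparse, multi-scale frame sampling described in the abstract can be sketched as follows. This is a minimal illustration of the sampling idea only, not the authors' implementation; the function names, the choice of scales, and the use of Python's `random` module are assumptions for the sketch.

```python
import random

def sample_frame_relation(num_frames, scale, rng):
    """Sparsely sample `scale` ordered frame indices from a clip.

    Mirrors the TRN idea of reasoning over a small, temporally ordered
    subset of frames rather than convolving over dense frame stacks.
    """
    # Sample without replacement, then sort to preserve temporal order.
    return tuple(sorted(rng.sample(range(num_frames), scale)))

def multiscale_relations(num_frames, scales=(2, 3, 4), seed=0):
    """Return one sampled ordered frame tuple per temporal scale.

    A multi-scale TRN would feed each tuple's frame features through a
    scale-specific relation module and sum the resulting scores; here we
    only illustrate the sampling step.
    """
    rng = random.Random(seed)
    return {s: sample_frame_relation(num_frames, s, rng) for s in scales}
```

For a 30-frame clip, `multiscale_relations(30)` yields one ordered 2-frame, 3-frame, and 4-frame tuple; a single-scale TRN corresponds to using only one entry of `scales`.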
Pages: 26633-26653 (20 pages)