Facial emotion recognition of deaf and hard-of-hearing students for engagement detection using deep learning

Cited by: 18
Authors
Lasri, Imane [1]
Riadsolh, Anouar [1]
Elbelkacemi, Mourad [1]
Affiliations
[1] Mohammed V Univ Rabat, Fac Sci, Lab Concept & Syst Elect Signals & Informat, Rabat, Morocco
Keywords
Facial emotion recognition; Deep convolutional neural networks; Transfer learning; Deafness; Student engagement; Expression recognition; Face
DOI
10.1007/s10639-022-11370-4
Chinese Library Classification (CLC): G40 [Education]
Subject classification codes: 040101; 120403
Abstract
Nowadays, facial expression recognition (FER) has drawn considerable attention from the research community across various application domains, owing to recent advances in deep learning. In the education field, facial expression recognition has the potential to evaluate students' engagement in a classroom environment, especially for deaf and hard-of-hearing students. Several works have been conducted on detecting students' engagement from facial expressions using traditional machine learning or shallow convolutional neural networks (CNNs) with only a few layers. However, measuring deaf and hard-of-hearing students' engagement remains an unexplored area of experimental research. Therefore, in this study we propose a novel approach for detecting the engagement level ('highly engaged', 'nominally engaged', and 'not engaged') from the facial emotions of deaf and hard-of-hearing students using a deep CNN (DCNN) model and a transfer learning (TL) technique. A pre-trained VGG-16 model is employed and fine-tuned on the Japanese Female Facial Expression (JAFFE) dataset and the Karolinska Directed Emotional Faces (KDEF) dataset. The performance of the proposed model is then compared with that of seven other pre-trained DCNN models (VGG-19, Inception v3, DenseNet-121, DenseNet-169, MobileNet, ResNet-50, and Xception). Under 10-fold cross-validation, the best test accuracies achieved with VGG-16 are 98% and 99% on the JAFFE and KDEF datasets, respectively. These results show that the proposed approach outperforms other state-of-the-art methods.
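As an illustration of the transfer-learning pipeline described in the abstract, the minimal Keras sketch below loads an ImageNet-pretrained VGG-16 backbone, freezes its convolutional base, and attaches a small emotion-classification head. It is not the authors' implementation: the input size, head width, dropout rate, optimizer, learning rate, number of emotion classes, and the emotion-to-engagement mapping are all illustrative assumptions.

# Illustrative sketch of a VGG-16 transfer-learning setup for FER.
# All hyperparameters are assumptions, not the paper's reported configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7  # assumed: the seven basic facial emotion classes

def build_vgg16_fer_model(input_shape=(224, 224, 3)):
    # Load VGG-16 pretrained on ImageNet, dropping its original classifier.
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional base; train only the new head

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # assumed head width
        layers.Dropout(0.5),                    # assumed dropout rate
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # assumed
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical emotion-to-engagement mapping (one plausible reading of the
# three levels named in the abstract; the paper's actual mapping may differ):
ENGAGEMENT = {
    "happiness": "highly engaged", "surprise": "highly engaged",
    "neutral": "nominally engaged",
    "anger": "not engaged", "sadness": "not engaged",
    "fear": "not engaged", "disgust": "not engaged",
}

Under the 10-fold cross-validation protocol mentioned above, a model like this would be rebuilt and retrained on each fold's training split and evaluated on the held-out fold, with the reported accuracy averaged across folds.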
Pages: 4069-4092
Page count: 24
References (39 total)
  • [11] Ekman P., Friesen W. V. Constants across cultures in face and emotion. Journal of Personality and Social Psychology, 1971, 17(2): 124-129.
  • [12] Ellaban H. International Journal of Computer Applications, 2017, 159: 23. DOI: 10.5120/ijca2017913009.
  • [13] Eng S. K. IOP Conference Series: Materials Science and Engineering, 2019, 705. DOI: 10.1088/1757-899X/705/1/012031.
  • [14] Hamester D. IEEE International Joint Conference on Neural Networks (IJCNN), 2015.
  • [15] He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
  • [16] Holder R. P., Tapamo J. R. Improved gradient local ternary patterns for facial expression recognition. EURASIP Journal on Image and Video Processing, 2017.
  • [17] Howard A. G. arXiv preprint, 2017.
  • [18] Huang G. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 2261. DOI: 10.1109/CVPR.2017.243.
  • [19] Jain N., Kumar S., Kumar A., Shamsolmoali P., Zareapoor M. Hybrid deep neural networks for face emotion recognition. Pattern Recognition Letters, 2018, 115: 101-106.
  • [20] Jin B., Qu Y., Zhang L., Gao Z. Diagnosing Parkinson disease through facial expression recognition: video analysis. Journal of Medical Internet Research, 2020, 22(7).