Engagement Detection Based on Analyzing Micro Body Gestures Using 3D CNN

Cited by: 5
Authors
Khenkar, Shoroog [1]
Jarraya, Salma Kammoun [1,2]
Affiliations
[1] King Abdulaziz Univ, Dept Comp Sci, Jeddah, Saudi Arabia
[2] MIRACL Lab, Sfax, Tunisia
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2022, Vol. 70, No. 02
Keywords
Micro body gestures; engagement detection; 3D CNN; transfer learning; e-learning; spatiotemporal features; STUDENT ENGAGEMENT; FACIAL EXPRESSIONS; RECOGNITION;
DOI
10.32604/cmc.2022.019152
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This paper proposes a novel, efficient and affordable approach to detecting students' engagement levels in an e-learning environment using webcams. Our method analyzes the spatiotemporal features of e-learners' micro body gestures, which are mapped to emotions and the corresponding engagement states. The proposed engagement detection model uses a three-dimensional convolutional neural network (3D CNN) to analyze both spatial and temporal information across video frames. We follow a transfer learning approach based on the C3D model trained on the Sports-1M dataset. The adopted C3D model is used in two ways: as a feature extractor combined with linear classifiers, and as a classifier after fine-tuning the pretrained network. The model was tested, and its performance was evaluated against existing methods; it outperformed them with an accuracy of 94%. The results of this work will contribute to the development of smart, interactive e-learning systems that adapt their responses to users' engagement levels.
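To make the two transfer-learning strategies described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' code: the original C3D weights pretrained on Sports-1M are not distributed with torchvision, so a Kinetics-400-pretrained r3d_18 backbone stands in for C3D, and the number of engagement classes, clip shapes, and labels are hypothetical placeholders.

```python
# Sketch: two transfer-learning strategies with a pretrained 3D CNN
# (assumptions: r3d_18/Kinetics-400 as a stand-in backbone; dummy clips and labels).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_CLASSES = 3                          # hypothetical engagement levels (low/medium/high)
clips = torch.randn(2, 3, 16, 112, 112)  # dummy batch: (batch, channels, frames, H, W)

# --- Strategy 1: frozen backbone as a spatiotemporal feature extractor ---
backbone = r3d_18(weights=R3D_18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # drop the original classification head
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False              # freeze all pretrained weights

with torch.no_grad():
    features = backbone(clips)           # (batch, 512) clip-level descriptors
linear_clf = nn.Linear(features.shape[1], NUM_CLASSES)  # only this layer is trained
logits = linear_clf(features)

# --- Strategy 2: fine-tuning the pretrained network end to end ---
model = r3d_18(weights=R3D_18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new engagement head
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
labels = torch.randint(0, NUM_CLASSES, (clips.shape[0],))  # dummy targets
optimizer.zero_grad()
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
```

The feature-extraction route updates only the final linear layer, whereas fine-tuning updates the pretrained convolutional weights with a small learning rate; which one works better typically depends on how much labeled engagement data is available.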
Pages: 2655-2677
Number of pages: 23