Student behavior analysis to measure engagement levels in online learning environments

Cited by: 26
Authors
Altuwairqi, Khawlah [1 ]
Jarraya, Salma Kammoun [1 ,2 ]
Allinjawi, Arwa [1 ]
Hammami, Mohamed [2 ,3 ]
Affiliations
[1] King Abdulaziz Univ, Dept Comp Sci, Jeddah, Saudi Arabia
[2] MIRACL Lab, Sfax, Tunisia
[3] Univ Sfax, Dept Comp Sci, Fac Sci, Sfax, Tunisia
Keywords
Academic facial emotions; Keyboard and mouse behaviors; Convolutional neural network (CNN); Affective model; Engagement level; FACIAL EXPRESSION; DEEP;
DOI
10.1007/s11760-021-01869-7
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic and communication technology];
Discipline classification codes
0808; 0809;
Abstract
Since the COVID-19 pandemic, few dispute the importance of smart online learning systems in education. Measuring student engagement is a crucial step toward such systems: a smart online learning system can automatically adapt to learners' emotions and provide feedback on their motivation. Over the last few decades, online learning environments have generated tremendous interest among researchers in computer-based education, and the central challenge is how to measure student engagement from students' emotions. Interest has grown in computer vision and camera-based solutions as technologies that overcome the limits of both human observation and the expensive equipment traditionally used to measure engagement. Several solutions have been proposed, but few are behavior-based. In response, this paper proposes a new automatic multimodal approach for measuring student engagement levels in real time. To obtain robust and accurate engagement measures, we combine and analyze three modalities representing students' behaviors: emotions from facial expressions, keyboard keystrokes, and mouse movements. The solution operates in real time, reports the exact engagement level, and uses the least expensive equipment possible. We validate the proposed approach through three main experiments (single, dual, and multimodal) on novel, realistic student engagement datasets built for this purpose. The multimodal approach achieves the highest accuracy (95.23%) and the lowest mean square error (MSE) of 0.04.
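The abstract describes fusing three behavioral modalities (facial emotions, keystrokes, mouse movements) into a single engagement level. As an illustration only, the sketch below shows one common way such a combination can be done: a weighted late fusion of per-modality scores mapped onto discrete levels. The function name, weights, and level labels are hypothetical assumptions, not details from the paper.

```python
# Hypothetical late-fusion sketch: combine three normalized [0, 1]
# per-modality engagement scores into one fused score and a discrete level.
# The weights and level names below are illustrative, not from the paper.

ENGAGEMENT_LEVELS = ["disengaged", "low", "medium", "high"]

def fuse_engagement(face_score, keyboard_score, mouse_score,
                    weights=(0.5, 0.3, 0.2)):
    """Return (fused_score, level) from a weighted average of modality scores."""
    scores = (face_score, keyboard_score, mouse_score)
    fused = sum(w * s for w, s in zip(weights, scores))
    # Map the fused score in [0, 1] onto one of the discrete levels.
    index = min(int(fused * len(ENGAGEMENT_LEVELS)), len(ENGAGEMENT_LEVELS) - 1)
    return fused, ENGAGEMENT_LEVELS[index]

fused, level = fuse_engagement(0.9, 0.7, 0.6)
# fused = 0.5*0.9 + 0.3*0.7 + 0.2*0.6 = 0.78 -> "high"
```

In practice the paper reports that the multimodal combination outperforms single- and dual-modality variants, which is consistent with the intuition behind such fusion: each modality covers failure modes of the others (e.g., a disengaged face during active typing).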
Pages: 1387-1395
Page count: 9