MDNN: Predicting Student Engagement via Gaze Direction and Facial Expression in Collaborative Learning

Times Cited: 6
Authors
Chen, Yi [1 ]
Zhou, Jin [1 ]
Gao, Qianting [2 ]
Gao, Jing [1 ]
Zhang, Wei [3 ]
Affiliations
[1] Cent China Normal Univ, Sch Comp Sci, Wuhan 430079, Peoples R China
[2] Rensselaer Polytech Inst, Sch Sci, Comp Sci, Troy, NY 12180 USA
[3] Cent China Normal Univ, Natl Engn Lab Educ Big Data Applicat Technol, Wuhan 430079, Peoples R China
Source
CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES | 2023, Vol. 136, Issue 1
Funding
National Natural Science Foundation of China;
Keywords
Engagement; facial expression; deep network; gaze; joint attention
DOI
10.32604/cmes.2023.023234
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification
08;
Abstract
Predicting student engagement in a collaborative learning setting is essential to improving the quality of learning. Collaborative learning is a strategy of learning through groups or teams; when collaborative learning takes place, every student in the group is expected to participate in the learning activities. Research has shown that students who are actively involved in class learn more. Gaze behavior and facial expression are important nonverbal indicators of engagement in collaborative learning environments, but previous studies required students to wear sensors or eye-tracking devices, which impose cost barriers and technical interference on daily teaching practice. In this paper, student engagement is analyzed automatically using computer vision. We tackle the problem of engagement prediction in collaborative learning with a multi-modal deep neural network (MDNN) that combines facial expression and gaze direction as two individual components to predict engagement levels. Our multi-modal solution was evaluated in a real collaborative learning environment, and the results show that the model can accurately predict students' performance in that environment.
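
The abstract describes MDNN only at a high level: two modality-specific components, one for facial expression and one for gaze direction, whose outputs are combined to predict engagement levels. Below is a minimal PyTorch sketch of one plausible reading of that design, assuming late fusion of the two branches; all module names, feature dimensions, and the fusion scheme are illustrative assumptions, not the paper's actual MDNN architecture.

import torch
import torch.nn as nn

class MDNNSketch(nn.Module):
    """Hypothetical two-branch network: expression + gaze -> engagement level."""
    def __init__(self, expr_dim=512, gaze_dim=2, hidden=128, num_levels=3):
        super().__init__()
        # Branch 1: facial-expression features (e.g., from a CNN backbone).
        self.expr_branch = nn.Sequential(
            nn.Linear(expr_dim, hidden), nn.ReLU(), nn.Dropout(0.5))
        # Branch 2: gaze-direction features (e.g., a yaw/pitch angle vector).
        self.gaze_branch = nn.Sequential(
            nn.Linear(gaze_dim, hidden), nn.ReLU(), nn.Dropout(0.5))
        # Late fusion: concatenate both embeddings, classify into levels.
        self.classifier = nn.Linear(2 * hidden, num_levels)

    def forward(self, expr_feat, gaze_feat):
        fused = torch.cat([self.expr_branch(expr_feat),
                           self.gaze_branch(gaze_feat)], dim=-1)
        return self.classifier(fused)  # logits over engagement levels

# Dummy forward pass: a batch of 4 students.
model = MDNNSketch()
logits = model(torch.randn(4, 512), torch.randn(4, 2))
print(logits.shape)  # torch.Size([4, 3])

A forward pass on four students with 512-dimensional expression features and 2-dimensional gaze vectors yields a 4 x 3 matrix of engagement-level logits; the dimensions and the number of levels are placeholders chosen for the example.
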
Pages: 381-401
Number of Pages: 21