A review on data fusion in multimodal learning analytics and educational data mining

Cited by: 51
Authors
Chango, Wilson [1 ]
Lara, Juan A. [2 ]
Cerezo, Rebeca [3 ]
Romero, Cristobal [4 ]
Affiliations
[1] Pontifical Catholic Univ Ecuador, Dept Syst & Comp, Quito, Ecuador
[2] Madrid Open Univ, Dept Comp Sci, Madrid, Spain
[3] Univ Oviedo, Dept Psychol, Oviedo, Spain
[4] Univ Cordoba, DaSCI Inst, Dept Comp Sci, Cordoba, Spain
Keywords
data fusion; educational data science; multimodal learning; smart learning; recognition
DOI
10.1002/widm.1458
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
New educational models such as smart learning environments use digital and context-aware devices to facilitate the learning process. In this new educational scenario, a huge quantity of multimodal student data from a variety of sources can be captured, fused, and analyzed. This offers researchers and educators a unique opportunity to discover new knowledge, better understand the learning process, and intervene when necessary. However, data fusion approaches and techniques must be applied correctly in order to combine the various sources of multimodal learning analytics (MLA). These sources or modalities in MLA include audio, video, electrodermal activity data, eye-tracking, user logs, and click-stream data, but also learning artifacts and more natural human signals such as gestures, gaze, speech, or writing. This survey introduces data fusion in learning analytics (LA) and educational data mining (EDM) and describes how these data fusion techniques have been applied in smart learning. It shows the current state of the art by reviewing the main publications, the main types of fused educational data, and the data fusion approaches and techniques used in EDM/LA, as well as the main open problems, trends, and challenges in this specific research area. This article is categorized under: Application Areas > Education and Learning
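As a purely illustrative aside (not taken from the article), the two canonical fusion strategies surveyed in this literature can be sketched in a few lines of Python: feature-level (early) fusion, which concatenates per-modality features before training a single model, and decision-level (late) fusion, which trains one model per modality and combines their predictions. All feature arrays, labels, and variable names below (eye_tracking, clickstream, engaged) are hypothetical placeholders, and scikit-learn and NumPy are assumed to be available.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-student features from two modalities (placeholder data).
eye_tracking = rng.normal(size=(n, 5))   # e.g., fixation statistics
clickstream = rng.normal(size=(n, 8))    # e.g., LMS activity counts
engaged = (eye_tracking[:, 0] + clickstream[:, 0] > 0).astype(int)  # toy label

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Early (feature-level) fusion: concatenate modality features, train one model.
X_early = np.hstack([eye_tracking, clickstream])
early_model = LogisticRegression().fit(X_early[idx_train], engaged[idx_train])
print("early fusion accuracy:", early_model.score(X_early[idx_test], engaged[idx_test]))

# Late (decision-level) fusion: one model per modality, average the predicted
# class probabilities before thresholding.
m_eye = LogisticRegression().fit(eye_tracking[idx_train], engaged[idx_train])
m_click = LogisticRegression().fit(clickstream[idx_train], engaged[idx_train])
p = (m_eye.predict_proba(eye_tracking[idx_test])[:, 1]
     + m_click.predict_proba(clickstream[idx_test])[:, 1]) / 2
late_acc = ((p > 0.5).astype(int) == engaged[idx_test]).mean()
print("late fusion accuracy:", late_acc)

Whether early or late fusion works better for a given dataset is an empirical question that depends on how correlated, how noisy, and how differently scaled the modalities are; this sketch only makes the distinction between the two strategies concrete.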
Pages: 19