A review on data fusion in multimodal learning analytics and educational data mining

Cited by: 51
Authors
Chango, Wilson [1 ]
Lara, Juan A. [2 ]
Cerezo, Rebeca [3 ]
Romero, Cristobal [4 ]
Affiliations
[1] Pontifical Catholic Univ Ecuador, Dept Syst & Comp, Quito, Ecuador
[2] Madrid Open Univ, Dept Comp Sci, Madrid, Spain
[3] Univ Oviedo, Dept Psychol, Oviedo, Spain
[4] Univ Cordoba, DaSCI Inst, Dept Comp Sci, Cordoba, Spain
Keywords
data fusion; educational data science; multimodal learning; smart learning; recognition
DOI
10.1002/widm.1458
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
New educational models such as smart learning environments use digital and context-aware devices to facilitate the learning process. In this new educational scenario, a huge quantity of multimodal student data from a variety of sources can be captured, fused, and analyzed. This offers researchers and educators a unique opportunity to discover new knowledge, better understand the learning process, and intervene when necessary. However, data fusion approaches and techniques must be applied correctly in order to combine the various sources in multimodal learning analytics (MLA). These sources, or modalities, include audio, video, electrodermal activity data, eye tracking, user logs, and click-stream data, as well as learning artifacts and more natural human signals such as gestures, gaze, speech, and writing. This survey introduces data fusion in learning analytics (LA) and educational data mining (EDM) and shows how data fusion techniques have been applied in smart learning. It presents the current state of the art by reviewing the main publications, the main types of fused educational data, and the data fusion approaches and techniques used in EDM/LA, as well as the main open problems, trends, and challenges in this specific research area. This article is categorized under: Application Areas > Education and Learning
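The core operation the abstract describes, combining several multimodal data sources into a single representation, can be illustrated with a minimal feature-level ("early") fusion sketch. This is a generic illustration, not the paper's method; all modality names and feature values below are hypothetical.

```python
# Minimal sketch of feature-level (early) data fusion: per-modality
# feature vectors for one learning session are concatenated into a
# single vector before any modelling step. Hypothetical values only.
import numpy as np

# Hypothetical per-modality features for one student session.
eye_tracking = np.array([0.42, 3.1])       # e.g., fixation rate, mean saccade length
clickstream  = np.array([17.0, 5.0, 2.0])  # e.g., clicks, page views, quiz attempts
audio        = np.array([0.65])            # e.g., speech activity ratio

# Early fusion: concatenate all modality features into one vector,
# which a single classifier or regressor could then consume.
fused = np.concatenate([eye_tracking, clickstream, audio])
print(fused.shape)  # (6,)
```

Decision-level ("late") fusion, by contrast, would train one model per modality and combine their predictions, e.g., by voting or averaging; the survey reviews both families of approaches.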
Pages: 19
References
61 in total
[1] Andrade A., 2016, Journal of Learning Analytics, V3, P282, DOI 10.18608/jla.2016.32.14
[2] Bahreini, Kiavash; Nadolski, Rob; Westera, Wim. Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning [J]. INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2016, 32(05): 415-430
[3] Biggs J., 1987, STUDENT APPROACHES L
[4] Blikstein P., 2016, Journal of Learning Analytics, V3, P220, DOI 10.18608/jla.2016.32.11
[5] Brodny G., 2017, UNCERTAINTY INTEGRAT
[6] Budaher J., 2020, ARXIV PREPRINT ARXIV
[7] Chango W.; Cerezo R.; Romero C. Multi-source and multimodal data fusion for predicting academic performance in blended learning university courses [J]. Computers and Electrical Engineering, 2021, 89
[8] Chango, Wilson; Cerezo, Rebeca; Sanchez-Santillan, Miguel; Azevedo, Roger; Romero, Cristobal. Improving prediction of students' performance in intelligent tutoring systems using attribute selection and ensembles of different multimodal data sources [J]. JOURNAL OF COMPUTING IN HIGHER EDUCATION, 2021, 33(03): 614-634
[9] Chen, Jingying; Luo, Nan; Liu, Yuanyuan; Liu, Leyuan; Zhang, Kun; Kolodziej, Joanna. A hybrid intelligence-aided approach to affect-sensitive e-learning [J]. COMPUTING, 2016, 98(1-2): 215-233
[10] Chen, Xieling; Zou, Di; Xie, Haoran; Wang, Fu Lee. Past, present, and future of smart learning: a topic-based bibliometric analysis [J]. INTERNATIONAL JOURNAL OF EDUCATIONAL TECHNOLOGY IN HIGHER EDUCATION, 2021, 18(01)