Exploring Indicators for Collaboration Quality and Its Dimensions in Classroom Settings Using Multimodal Learning Analytics

Cited: 2
Authors
Chejara, Pankaj [1 ]
Prieto, Luis P. [2 ]
Rodriguez-Triana, Maria Jesus [1 ]
Ruiz-Calleja, Adolfo [1 ]
Kasepalu, Reet [1 ]
Chounta, Irene-Angelica [3 ]
Schneider, Bertrand [4 ]
Affiliations
[1] Tallinn Univ, Tallinn, Estonia
[2] Univ Valladolid, Valladolid, Spain
[3] Univ Duisburg Essen, Duisburg, Germany
[4] Harvard Univ, Cambridge, MA 02138 USA
Source
RESPONSIVE AND SUSTAINABLE EDUCATIONAL FUTURES, EC-TEL 2023 | 2023, Vol. 14200
Keywords
Multimodal Learning Analytics; MMLA; Computer-Supported Collaborative Learning; CSCL; Collaboration Quality; Correlation Analysis; Machine Learning; Clustering; Facial Action Units; KNOWLEDGE; FRAMEWORK; STUDENTS; SIGNALS
DOI
10.1007/978-3-031-42682-7_5
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Discipline Classification Codes
081203; 0835
Abstract
Multimodal Learning Analytics (MMLA) researchers have explored relationships between collaboration quality and multimodal data. However, state-of-the-art research has rarely investigated authentic settings and has seldom used video data, which can offer rich behavioral information. In this paper, we present our findings on potential indicators of collaboration quality and its underlying dimensions, such as argumentation and mutual understanding. We collected multimodal data (namely, video and logs) from four Estonian classrooms during authentic computer-supported collaborative learning activities. Our results show that vertical head movement (looking up and down) and mouth-region features could serve as potential indicators of collaboration quality and its aforementioned dimensions. In addition, our clustering results indicate the potential of video data for identifying different levels of collaboration quality (e.g., high, medium, low). These findings have implications for building systems that monitor and guide collaboration quality in authentic classroom settings.
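The record gives no implementation details, but the pipeline the abstract outlines (video-derived facial action units and head pose, followed by clustering into collaboration-quality levels) can be sketched. Below is a minimal Python illustration assuming OpenFace-style session-level features; the feature columns, the synthetic data, and the choice of k-means with three clusters are assumptions for illustration, not the authors' actual method.

```python
# Hedged sketch: clustering group-work sessions into collaboration-quality
# levels from video-derived features. Feature names follow OpenFace
# conventions (pose_Rx = head pitch, i.e., vertical head movement;
# AU25/AU26 = mouth-region action units), but the data is synthetic and
# the pipeline is an assumption, not the paper's reported implementation.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# One row per group-work session; columns are session-level aggregates of
# per-frame estimates (hypothetical stand-ins for the paper's features).
n_sessions = 24
features = np.column_stack([
    rng.normal(0.10, 0.05, n_sessions),  # mean |pose_Rx|: vertical head movement
    rng.normal(0.80, 0.30, n_sessions),  # mean AU25 intensity: lips part
    rng.normal(0.50, 0.25, n_sessions),  # mean AU26 intensity: jaw drop
])

# Standardize so no single feature dominates the Euclidean distance.
X = StandardScaler().fit_transform(features)

# Three clusters as a proxy for the high/medium/low levels the abstract
# mentions; in practice k would be chosen empirically.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

for level in range(3):
    print(f"cluster {level}: {np.sum(labels == level)} sessions")
```

Whether such clusters actually correspond to collaboration-quality levels would then need to be validated against human ratings, as the abstract's reference to identifying high, medium, and low quality implies.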
Pages: 60-74
Number of pages: 15