Leveraging mobile eye-trackers to capture joint visual attention in co-located collaborative learning groups

Cited by: 0
Authors
Bertrand Schneider
Kshitij Sharma
Sebastien Cuendet
Guillaume Zufferey
Pierre Dillenbourg
Roy Pea
Affiliations
[1] Harvard University, Graduate School of Education
[2] EPFL
[3] Stanford University
Source
International Journal of Computer-Supported Collaborative Learning | 2018, Vol. 13
Keywords
Joint visual attention; Collaborative learning; Dual eye-tracking;
DOI: Not available
Abstract
This paper describes a promising methodology for studying co-located groups: mobile eye-trackers. We provide a comprehensive description of our data collection and analysis processes so that other researchers can take advantage of this cutting-edge technology. Data were collected in a controlled experiment in which 27 student dyads (N = 54) interacted with a Tangible User Interface. They first had to define design principles for optimizing a warehouse layout by analyzing a set of Contrasting Cases, and then build a small-scale layout based on those principles. The contributions of this paper are that: 1) we replicated prior research showing that levels of Joint Visual Attention (JVA) are correlated with collaboration quality across all groups; 2) we then qualitatively analyzed two dyads with high levels of JVA and showed that high JVA can hide a free-rider effect (Salomon and Globerson 1989); 3) in conducting this analysis, we additionally developed a new visualization (augmented cross-recurrence graphs) that allows researchers to distinguish between high-JVA groups with balanced and unbalanced levels of participation; 4) finally, we generalized this effect to the entire sample and found a significant negative correlation between dyads’ learning gains and unbalanced levels of participation (as computed from the eye-tracking data). We conclude by discussing implications for automatically analyzing students’ interactions using dual eye-trackers.
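For readers unfamiliar with these measures, the sketch below illustrates, under simplified assumptions, how a JVA level and a participation-balance index might be computed from two synchronized mobile eye-tracking streams mapped onto a shared reference plane. This is not the paper's actual pipeline: the distance threshold (RADIUS), lag window (LAG), the leader/follower imbalance index, and all function names are illustrative assumptions.

```python
import numpy as np

RADIUS = 100.0   # max gaze distance (shared-plane units, e.g. pixels) to count as "joint"
LAG = 20         # +/- lag window in samples (e.g. ~2 s at 10 Hz); an assumed value

def cross_recurrence(gaze_a: np.ndarray, gaze_b: np.ndarray, radius: float = RADIUS) -> np.ndarray:
    """Binary cross-recurrence matrix: R[i, j] = 1 when partner A's gaze at
    sample i and partner B's gaze at sample j fall within `radius` of each other."""
    # gaze_a, gaze_b: (T, 2) arrays of x/y gaze positions on a shared reference plane.
    diff = gaze_a[:, None, :] - gaze_b[None, :, :]   # (T, T, 2) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)             # (T, T) pairwise distances
    return (dist <= radius).astype(int)

def jva_level(recurrence: np.ndarray, lag: int = LAG) -> float:
    """Proportion of samples where the partners look at the same spot within
    a +/- `lag` band around the diagonal (a common JVA-style summary)."""
    t = recurrence.shape[0]
    hits = sum(
        int(recurrence[i, max(0, i - lag):min(t, i + lag + 1)].any()) for i in range(t)
    )
    return hits / t

def participation_imbalance(recurrence: np.ndarray, lag: int = LAG) -> float:
    """Crude leader/follower asymmetry: how often A's fixation precedes B's
    matching fixation versus the reverse, within the lag band.
    0 = perfectly balanced, 1 = one partner always leads."""
    a_leads = b_leads = 0
    t = recurrence.shape[0]
    for i in range(t):
        for j in range(max(0, i - lag), min(t, i + lag + 1)):
            if recurrence[i, j]:
                if j > i:
                    a_leads += 1   # B joins A's location later -> A led
                elif j < i:
                    b_leads += 1   # A joins B's location later -> B led
    total = a_leads + b_leads
    return abs(a_leads - b_leads) / total if total else 0.0

# Usage with synthetic gaze data (two 60-sample streams on a 1000 x 1000 plane):
rng = np.random.default_rng(0)
gaze_a = rng.uniform(0, 1000, size=(60, 2))
gaze_b = gaze_a + rng.normal(0, 80, size=(60, 2))   # B roughly follows A's gaze
R = cross_recurrence(gaze_a, gaze_b)
print(f"JVA level: {jva_level(R):.2f}, imbalance: {participation_imbalance(R):.2f}")
```

Plotting the binary matrix returned by cross_recurrence (time on both axes) yields a standard cross-recurrence graph; the paper's "augmented" variant additionally encodes which partner initiates each joint-attention episode, which is what the imbalance index above roughly approximates.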
Pages: 241-261
Number of pages: 20
References (partial list; 35 entries in total)
  • Salomon, G., & Globerson, T. (1989). When teams do not function the way they ought to. International Journal of Educational Research, 13(1), 89-99.
  • Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375-397.
  • Schneider, B., et al. (2016). Unpacking the perceptual benefits of a tangible interface. ACM Transactions on Computer-Human Interaction (TOCHI), 23, Article 39.
  • Schwartz, D. L. (1995). The emergence of abstract representations in dyad problem solving. The Journal of the Learning Sciences, 4(3), 321-354.