Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment

Cited by: 10
Authors
Ng, Jeremy Tzi-Dong [1 ]
Hu, Xiao [1 ]
Que, Ying [1 ]
Affiliations
[1] University of Hong Kong, Faculty of Education, Hong Kong, People's Republic of China
Source
LAK22 CONFERENCE PROCEEDINGS: THE TWELFTH INTERNATIONAL CONFERENCE ON LEARNING ANALYTICS & KNOWLEDGE | 2022
Keywords
Virtual reality; Eye-tracking; Multimodal learning analytics; Cultural heritage; Multimedia learning;
DOI
10.1145/3506860.3506881
CLC Number
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835
Abstract
In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal or spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners' self-report data. Learners' attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze eye-movement and self-report data from 40 learners. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, with or without audio narration, 2) text diverted learners' attention away from other visual elements that contextualized the heritage sites, 3) audio narration alone best simulated the experience of a real-world heritage tour, 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research.
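The record does not include the authors' analysis code, and the paper reports an experiment rather than an algorithm. Purely as an illustrative sketch of the kind of analysis the abstract describes (duration-weighted fixation heatmaps and a between-condition comparison of dwell time on a text area of interest), the Python fragment below uses NumPy and SciPy; the screen size, AOI bounds, and randomly generated per-participant samples are hypothetical, and the Mann-Whitney U test merely stands in for the unspecified "statistical tests", not the authors' actual pipeline.

    # Illustrative sketch only; not the paper's published code.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.stats import mannwhitneyu

    def fixation_heatmap(x, y, dur, width=1920, height=1080, sigma=40):
        """Duration-weighted fixation heatmap: Gaussian-smoothed 2D histogram."""
        grid, _, _ = np.histogram2d(y, x, bins=(height, width),
                                    range=[[0, height], [0, width]], weights=dur)
        return gaussian_filter(grid, sigma=sigma)

    def aoi_dwell_proportion(x, y, dur, aoi):
        """Share of total dwell time inside a rectangular AOI (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = aoi
        inside = (x >= x0) & (x < x1) & (y >= y0) & (y < y1)
        return dur[inside].sum() / dur.sum()

    rng = np.random.default_rng(0)
    # Synthetic fixations: positions in pixels, durations in seconds.
    x = rng.uniform(0, 1920, 500)
    y = rng.uniform(0, 1080, 500)
    dur = rng.uniform(0.1, 0.6, 500)
    heat = fixation_heatmap(x, y, dur)
    share = aoi_dwell_proportion(x, y, dur, aoi=(200, 800, 1700, 1000))
    print(f"text-AOI dwell share: {share:.2f}")

    # Hypothetical per-participant text-AOI dwell proportions, two conditions.
    text_only = rng.uniform(0.2, 0.6, size=20)
    text_audio = rng.uniform(0.1, 0.5, size=20)
    stat, p = mannwhitneyu(text_only, text_audio, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}")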
Pages: 451-457
Number of pages: 7