Multimodal teaching, learning and training in virtual reality: a review and case study

Cited: 0
Authors
Philippe S. [1 ]
Souchet A.D. [1 ,2 ]
Lameras P. [3 ]
Petridis P. [4 ]
Caporal J. [1 ]
Coldeboeuf G. [5 ]
Duzan H. [5 ]
Affiliations
[1] Department of Research, Manzalab, Paris
[2] Paragraph Laboratory, Paris 8 University, Saint-Denis
[3] School of Computing, Electronics and Mathematics, Coventry University
[4] Aston Business School, Aston University
Keywords
Multimodality; Semiotic resources; Teaching and learning; Training; Virtual reality;
DOI
10.1016/j.vrih.2020.07.008
Abstract
It is becoming increasingly prevalent in digital learning research to encompass an array of different meanings, spaces, processes, and teaching strategies for discerning a global perspective on constructing the student learning experience. Multimodality is an emergent phenomenon that may influence how digital learning is designed, especially when employed in highly interactive and immersive learning environments such as Virtual Reality (VR). VR environments may help students become active learners by consciously attending to and reflecting on critique, leveraging the reflexivity and novel meaning-making most likely to lead to conceptual change. This paper draws on eleven industrial case studies to highlight the application of multimodal VR-based teaching and training as a pedagogically rich strategy that may be designed, mapped, and visualized through distinct VR design elements and features. The outcomes of the use cases contribute to discerning in-VR multimodal teaching as an emerging discourse that couples system design-based paradigms with embodied, situated, and reflective praxis in spatial, emotional, and temporal VR learning environments. © 2019 Beijing Zhongke Journal Publishing Co. Ltd
Pages: 421-442
Page count: 21