Detecting Learning Stages within a Sensor-Based Mixed Reality Learning Environment Using Deep Learning

Cited by: 3
Authors
Ogunseiju, Omobolanle [1 ]
Akinniyi, Abiola [2 ]
Gonsalves, Nihar [2 ]
Khalid, Mohammad [2 ]
Akanmu, Abiola [3 ]
Affiliations
[1] Georgia Tech, Coll Design, Sch Bldg Construct, Atlanta, GA 30332 USA
[2] Virginia Tech, Myers Lawson Sch Construct, Blacksburg, VA 24060 USA
[3] Virginia Tech, Myers Lawson Sch Construct, Construct Engn & Management, Blacksburg, VA 24060 USA
Funding
U.S. National Science Foundation
Keywords
Mixed reality; Laser scanning; Deep learning; Eye tracking; Construction education; EYE-TRACKING; VIRTUAL-REALITY; CLASSIFICATION; FIXATION; FRAME;
DOI
10.1061/JCCEE5.CPENG-5169
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Discipline classification codes
081203; 0835
Abstract
Mixed reality has been envisioned as an interactive and engaging pedagogical tool for providing experiential learning and potentially enhancing the acquisition of technical competencies in construction engineering education. However, to achieve seamless learning interactions and automated learning assessments, mixed reality environments must be intelligent, proactive, and adaptive to students' learning needs. Given the potential of artificial intelligence for promoting interactive, assistive, and self-reliant learning environments, and the reported effectiveness of deep learning in other domains, this study explores an approach to developing a smart mixed reality environment for technical skills acquisition in construction engineering education. The study builds on the usability assessment of a previously developed mixed reality environment for learning sensing technologies, such as laser scanners, used in the construction industry. Long short-term memory (LSTM) models and hybrid LSTM-convolutional neural network (CNN) models were trained on augmented eye-tracking data to predict students' learning interaction difficulties, cognitive development, and experience levels, using predefined labels obtained from think-aloud protocols and demographic questionnaires collected during laser scanning activities within the mixed reality learning environment. The proposed models performed well in recognizing interaction difficulties, experience levels, and cognitive development, with F1 scores of 95.95%, 98.52%, and 99.49%, respectively. The hybrid CNN-LSTM models achieved accuracies at least 20% higher than the LSTM models, but at a higher inference time. The efficacy of the models for detecting the required classes and the potential of the adopted data augmentation techniques for eye-tracking data are further reported. However, as model performance increased with data size, so did the computational cost. This study sets a precedent for exploring applications of artificial intelligence in mixed reality learning environments for construction engineering education.
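The abstract highlights augmenting eye-tracking time series before training the LSTM and CNN-LSTM classifiers. The paper does not state which augmentation operations were used; as a hedged illustration only, two common time-series augmentations (Gaussian jittering and per-channel magnitude scaling) can be sketched in NumPy. The window shape and channel names below are assumptions, not details from the paper:

```python
import numpy as np

def jitter(batch, sigma=0.01, rng=None):
    """Add zero-mean Gaussian noise to every sample (jittering)."""
    rng = rng or np.random.default_rng(0)
    return batch + rng.normal(0.0, sigma, size=batch.shape)

def magnitude_scale(batch, sigma=0.1, rng=None):
    """Scale each sequence's channels by random factors near 1.0 (scaling)."""
    rng = rng or np.random.default_rng(1)
    n, _, channels = batch.shape
    factors = rng.normal(1.0, sigma, size=(n, 1, channels))
    return batch * factors

def augment(batch, n_copies=2):
    """Return the original sequences plus n_copies augmented variants each."""
    rng = np.random.default_rng(42)
    out = [batch]
    for _ in range(n_copies):
        out.append(magnitude_scale(jitter(batch, rng=rng), rng=rng))
    return np.concatenate(out, axis=0)

# Hypothetical eye-tracking windows: 8 sequences, 120 time steps, 4 channels
# (e.g., gaze x, gaze y, pupil diameter, fixation flag -- assumed features).
windows = np.random.default_rng(7).standard_normal((8, 120, 4))
augmented = augment(windows, n_copies=2)
print(augmented.shape)  # -> (24, 120, 4)
```

Enlarging the training set this way is one plausible reason the abstract reports performance rising with data size alongside growing computational cost: each extra augmented copy multiplies both the training examples and the training time.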
Pages: 17
Related Papers
50 records in total
  • [31] Environment-aware Sensor Fusion using Deep Learning
    Silva, Caio Fischer
    Borges, Paulo V. K.
    Castanho, Jose E. C.
    ICINCO: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL 2, 2019, : 88 - 96
  • [32] Recent Machine Learning Advancements in Sensor-Based Mobility Analysis: Deep Learning for Parkinson's Disease Assessment
    Eskofier, Bjoern M.
    Lee, Sunghoon I.
    Daneault, Jean-Francois
    Golabchi, Fatemeh N.
    Ferreira-Carvalho, Gabriela
    Vergara-Diaz, Gloria
    Sapienza, Stefano
    Costante, Gianluca
    Klucken, Jochen
    Kautz, Thomas
    Bonato, Paolo
    2016 38TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2016, : 655 - 658
  • [33] Representation Learning for Sensor-based Device Pairing
    Ngu Nguyen
    Jaehne-Raden, Nico
    Kulau, Ulf
    Sigg, Stephan
    2018 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS (PERCOM WORKSHOPS), 2018,
  • [34] Machine Learning for Sensor-Based Manufacturing Processes
    Moldovan, Dorin
    Cioara, Tudor
    Anghel, Ionut
    Salomie, Ioan
    2017 13TH IEEE INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP), 2017, : 147 - 154
  • [35] Using Deep Learning to Detecting Deepfakes
    Mallet, Jacob
    Dave, Rushit
    Seliya, Naeem
    Vanamala, Mounika
    2022 9TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE, ISCMI, 2022, : 1 - 5
  • [36] Comprehensive machine and deep learning analysis of sensor-based human activity recognition
    Balaha, Hossam Magdy
    Hassan, Asmaa El-Sayed
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (17): : 12793 - 12831
  • [37] A Multitask Deep Learning Approach for Sensor-Based Human Activity Recognition and Segmentation
    Duan, Furong
    Zhu, Tao
    Wang, Jinqiang
    Chen, Liming
    Ning, Huansheng
    Wan, Yaping
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [38] Involvement of Deep Learning for Vision Sensor-Based Autonomous Driving Control: A Review
    Khanum, Abida
    Lee, Chao-Yang
    Yang, Chu-Sing
    IEEE SENSORS JOURNAL, 2023, 23 (14) : 15321 - 15341
  • [39] Hybrid deep learning approaches for smartphone sensor-based human activity recognition
    Ghate, Vasundhara
    Hemalatha, C. Sweetlin
    Multimedia Tools and Applications, 2021, 80 : 35585 - 35604
  • [40] Wearable Sensor-Based Human Activity Recognition with Hybrid Deep Learning Model
    Luwe, Yee Jia
    Lee, Chin Poo
    Lim, Kian Ming
    INFORMATICS-BASEL, 2022, 9 (03):