Detecting Learning Stages within a Sensor-Based Mixed Reality Learning Environment Using Deep Learning

Cited by: 3
Authors
Ogunseiju, Omobolanle [1 ]
Akinniyi, Abiola [2 ]
Gonsalves, Nihar [2 ]
Khalid, Mohammad [2 ]
Akanmu, Abiola [3 ]
Affiliations
[1] Georgia Tech, Coll Design, Sch Bldg Construct, Atlanta, GA 30332 USA
[2] Virginia Tech, Myers Lawson Sch Construct, Blacksburg, VA 24060 USA
[3] Virginia Tech, Myers Lawson Sch Construct, Construct Engn & Management, Blacksburg, VA 24060 USA
Funding
US National Science Foundation
Keywords
Mixed reality; Laser scanning; Deep learning; Eye tracking; Construction education; EYE-TRACKING; VIRTUAL-REALITY; CLASSIFICATION; FIXATION; FRAME;
DOI
10.1061/JCCEE5.CPENG-5169
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Mixed reality has been envisioned as an interactive and engaging pedagogical tool for providing experiential learning and potentially enhancing the acquisition of technical competencies in construction engineering education. However, to achieve seamless learning interactions and automated learning assessment, mixed reality environments must be intelligent, proactive, and adaptive to students' learning needs. Given the potential of artificial intelligence to promote interactive, assistive, and self-reliant learning environments, and the reported effectiveness of deep learning in other domains, this study explores an approach to developing a smart mixed reality environment for technical skills acquisition in construction engineering education. The study builds on the usability assessment of a previously developed mixed reality environment for learning sensing technologies, such as laser scanners, in the construction industry. Long short-term memory (LSTM) models and hybrid LSTM-convolutional neural network (CNN) models were trained on augmented eye-tracking data to predict students' learning interaction difficulties, cognitive development, and experience levels. This was achieved using predefined labels obtained from think-aloud protocols and demographic questionnaires collected during laser scanning activities within the mixed reality learning environment. The proposed models performed well in recognizing interaction difficulty, experience level, and cognitive development, with F1 scores of 95.95%, 98.52%, and 99.49%, respectively. The hybrid CNN-LSTM models achieved accuracy at least 20% higher than the LSTM models, but at a higher inference time. The efficacy of the models in detecting the required classes and the potential of the adopted data augmentation techniques for eye-tracking data are further reported. However, as model performance increased with data size, so did the computational cost. This study sets a precedent for exploring applications of artificial intelligence in mixed reality learning environments for construction engineering education.
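The abstract mentions augmenting eye-tracking data before training the sequence models but does not specify the technique here. As a rough illustration only, the sketch below shows jittering, a common time-series augmentation that adds small Gaussian noise to gaze coordinates; the function name, parameters, and noise level are hypothetical and may differ from the paper's actual method.

```python
import random

def augment_gaze_sequence(seq, n_copies=3, jitter_std=0.01, seed=None):
    """Jittering augmentation for a gaze sequence (illustrative sketch).

    seq: list of (x, y) normalized gaze coordinates in [0, 1].
    Returns n_copies perturbed sequences, each with independent
    Gaussian noise added to every coordinate.
    """
    rng = random.Random(seed)
    augmented = []
    for _ in range(n_copies):
        augmented.append([
            (x + rng.gauss(0, jitter_std), y + rng.gauss(0, jitter_std))
            for x, y in seq
        ])
    return augmented

# Example: three jittered copies of a short fixation sequence.
fixations = [(0.42, 0.35), (0.44, 0.36), (0.61, 0.52)]
copies = augment_gaze_sequence(fixations, n_copies=3, seed=7)
```

Each augmented copy preserves the temporal structure of the original sequence while varying the exact coordinates, which is what lets sequence models such as LSTMs generalize from a small pool of recorded sessions.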
Pages: 17
Related Papers (50 total)
  • [41] Detecting location of fire in video stream environment using deep learning
    Kim Y.-J.
    Cho H.-C.
    Korean Institute of Electrical Engineers, 69: 474-479
  • [42] Automated detection of learning stages and interaction difficulty from eye-tracking data within a mixed reality learning environment
    Ogunseiju, Omobolanle Ruth
    Gonsalves, Nihar
    Akanmu, Abiola Abosede
    Abraham, Yewande
    Nnaji, Chukwuma
    SMART AND SUSTAINABLE BUILT ENVIRONMENT, 2024, 13 (06) : 1473 - 1489
  • [43] MIXED REALITY BASED ENVIRONMENT FOR LEARNING SENSING TECHNOLOGY APPLICATIONS IN CONSTRUCTION
    Ogunseiju, Omobolanle O.
    Akanmu, Abiola A.
    Bairaktarova, Diana
    JOURNAL OF INFORMATION TECHNOLOGY IN CONSTRUCTION, 2021, 26 : 863 - 885
  • [45] Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges, and Opportunities
    Chen, Kaixuan
    Zhang, Dalin
    Yao, Lina
    Guo, Bin
    Yu, Zhiwen
    Liu, Yunhao
    ACM COMPUTING SURVEYS, 2021, 54 (04)
  • [46] Hybrid deep learning approaches for smartphone sensor-based human activity recognition
    Ghate, Vasundhara
    Hemalatha, Sweetlin C.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (28-29) : 35585 - 35604
  • [47] Sensor-Based Human Activity Recognition with Spatio-Temporal Deep Learning
    Nafea, Ohoud
    Abdul, Wadood
    Muhammad, Ghulam
    Alsulaiman, Mansour
    SENSORS, 2021, 21 (06) : 1 - 20
  • [48] Mixed Reality Environment for Web-Based Laboratory Interactive Learning
    Saleem, A. I.
    Al-Aubidy, K. M.
    INTERNATIONAL JOURNAL OF ONLINE ENGINEERING, 2008, 4 (01) : 40 - 45
  • [49] Comprehensive machine and deep learning analysis of sensor-based human activity recognition
    Hossam Magdy Balaha
    Asmaa El-Sayed Hassan
    Neural Computing and Applications, 2023, 35 : 12793 - 12831
  • [50] A Deep Learning Tool Using Teaching Learning-Based Optimization for Supporting Smart Learning Environment
    Sooncharoen, Saisumpan
    Thepphakorn, Thatchai
    Pongcharoen, Pupong
    BLENDED LEARNING: EDUCATION IN A SMART LEARNING ENVIRONMENT, ICBL 2020, 2020, 12218 : 392 - 404