Interactive Design With Gesture and Voice Recognition in Virtual Teaching Environments

Cited: 3
Authors
Fang, Ke [1 ]
Wang, Jing [2 ]
Institutions
[1] Chengdu Normal University, Network & Information Center, Chengdu 610000, People's Republic of China
[2] Chengdu Normal University, Office for the Advancement of Education Informatization, Chengdu 610000, People's Republic of China
Keywords
Speech recognition; Gesture recognition; Solid modeling; Virtual reality; Training; Codes; Game theory; Tracking; Human-computer interaction; Recurrent neural networks; Virtual environments; Game engines; Hand tracking; Speech processing
DOI
10.1109/ACCESS.2023.3348846
CLC classification
TP [Automation technology, computer technology]
Subject classification
0812
Abstract
In virtual teaching scenarios, head-mounted display (HMD) interaction typically relies on traditional controllers and UI elements, which is poorly suited to teaching scenarios that require hand training. Existing improvements in this area have focused mainly on replacing controllers with gesture recognition. However, gesture recognition alone can be limiting in certain scenarios, such as complex operations or multitasking environments. This study designed and tested an interaction method that combines simple gestures with voice assistance, aiming to offer a more intuitive user experience and to enrich related research. A speech classification model was developed that is activated by a fist-clenching gesture and recognises specific Chinese voice commands to open various UI interfaces, which are then controlled by pointing gestures. Virtual scenarios were built in Unity, with hand tracking provided by the HTC OpenXR SDK; hand rendering and gesture recognition were implemented within Unity, and UI interaction was handled by the Unity XR Interaction Toolkit. The interaction method was detailed and exemplified using a teacher training simulation system, including sample code. An empirical test with 20 participants then compared the gesture-plus-voice operation with traditional controller operation, both quantitatively and qualitatively. The data suggest that while there is no significant difference in task completion time between the two methods, the combined gesture and voice method received positive user-experience feedback, indicating a promising direction for such interaction methods. Future work could add more gestures and expand the model training dataset to support additional interactive functions, meeting diverse virtual teaching needs.
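The paper's actual implementation uses Unity C# with the HTC OpenXR SDK and the Unity XR Interaction Toolkit, which is not reproduced here. Purely as an illustration, the gesture-gated voice-command flow the abstract describes (fist-clench arms the speech recogniser, a recognised Chinese command opens a UI panel, and a pointing gesture then drives that panel) can be sketched as a small state machine in Python; every class name, command string, and panel name below is a hypothetical placeholder, not from the paper:

```python
# Hypothetical sketch of the gesture-gated voice interaction flow:
# a fist-clench gesture arms the speech recogniser, a recognised
# voice command opens a UI panel, and a pointing gesture then
# interacts with the active panel. All names are illustrative.

class VoiceGestureController:
    # Maps recognised (Chinese) voice commands to UI panels.
    COMMANDS = {"打开菜单": "menu_panel", "开始训练": "training_panel"}

    def __init__(self):
        self.listening = False   # armed only after a fist-clench
        self.active_panel = None

    def on_gesture(self, gesture):
        if gesture == "fist_clench":
            self.listening = True            # arm speech recognition
        elif gesture == "point" and self.active_panel:
            return f"interact:{self.active_panel}"
        return None

    def on_voice(self, command):
        # Ignore speech unless a fist-clench armed the recogniser.
        if not self.listening:
            return None
        self.listening = False               # one command per activation
        self.active_panel = self.COMMANDS.get(command)
        return self.active_panel
```

In the real system these callbacks would be driven by XR Interaction Toolkit gesture events and by the output of the trained speech classification model, rather than by plain strings.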
Pages: 4213-4224 (12 pages)