Interactive Design With Gesture and Voice Recognition in Virtual Teaching Environments

Cited: 3
Authors
Fang, Ke [1 ]
Wang, Jing [2 ]
Affiliations
[1] Chengdu Normal Univ, Network & Informat Ctr, Chengdu 610000, Peoples R China
[2] Chengdu Normal Univ, Off Advancement Educ Informatizat, Chengdu 610000, Peoples R China
Keywords
Speech recognition; Gesture recognition; Solid modeling; Virtual reality; Training; Codes; Game theory; Tracking; Human computer interaction; Recurrent neural networks; Virtual environments; Game engines; hand tracking; human-computer interaction; recurrent neural networks; speech processing; virtual environments;
DOI
10.1109/ACCESS.2023.3348846
Chinese Library Classification (CLC): TP [Automation technology, computer technology]
Discipline classification code: 0812
Abstract
In virtual teaching scenarios, head-mounted display (HMD) interaction typically relies on traditional controllers and UI elements, which are poorly suited to teaching tasks that require hand training. Existing improvements in this area have primarily focused on replacing controllers with gesture recognition. However, gesture recognition alone may fall short in certain scenarios, such as complex operations or multitasking environments. This study designed and tested an interaction method that combines simple gestures with voice assistance, aiming to offer a more intuitive user experience and to enrich related research. A speech classification model was developed that is activated by a fist-clenching gesture and recognizes specific Chinese voice commands to open various UI interfaces, which are then controlled by pointing gestures. Virtual scenarios were constructed in Unity, with hand tracking provided by the HTC OpenXR SDK; hand rendering and gesture recognition were implemented in Unity, and UI interaction was handled with the Unity XR Interaction Toolkit. The interaction method was detailed and exemplified using a teacher training simulation system, including sample code. An empirical test involving 20 participants was then conducted, comparing the gesture-plus-voice operation with traditional controller operation, both quantitatively and qualitatively. The data suggest that while there is no significant difference in task completion time between the two methods, the combined gesture-and-voice method received positive feedback on user experience, indicating a promising direction for such interaction methods. Future work could add more gestures and expand the model training dataset to realize additional interactive functions, meeting diverse virtual teaching needs.
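The abstract describes a fist-to-activate, voice-to-open, point-to-operate flow built in Unity. Below is a minimal Unity C# sketch of that flow, not the authors' released sample code: HandPoseSource, SpeechClassifier, and the example command string are hypothetical placeholders standing in for the project's HTC OpenXR hand-tracking and Chinese speech-classification layers.

    // Minimal sketch of the described interaction flow (assumptions noted above).
    using System;
    using UnityEngine;

    // Assumed abstraction over hand-tracking data (e.g., joint poses from OpenXR).
    public abstract class HandPoseSource : MonoBehaviour
    {
        public abstract bool IsFistClenched();
    }

    // Assumed abstraction over the speech-classification model.
    public abstract class SpeechClassifier : MonoBehaviour
    {
        public abstract void StartListening(Action<string> onCommandRecognized);
        public abstract void StopListening();
    }

    public class GestureVoiceActivator : MonoBehaviour
    {
        [SerializeField] private HandPoseSource handPose;   // hand-tracking wrapper (assumed)
        [SerializeField] private SpeechClassifier speech;    // speech-model wrapper (assumed)
        [SerializeField] private GameObject commandMenuUI;   // panel operated via pointing/ray interaction

        private bool listening;

        private void Update()
        {
            // Step 1: a clenched fist activates the speech classifier.
            if (!listening && handPose.IsFistClenched())
            {
                listening = true;
                speech.StartListening(OnCommandRecognized);
            }
        }

        // Step 2: a recognized voice command opens the matching UI panel,
        // which the user then controls with pointing gestures.
        private void OnCommandRecognized(string command)
        {
            listening = false;
            speech.StopListening();
            if (command == "打开菜单") // "open menu" (illustrative command, assumed)
            {
                commandMenuUI.SetActive(true);
            }
        }
    }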
Pages: 4213-4224
Number of pages: 12
Related papers
50 records in total
  • [21] Interactive Teaching in Virtual Environments: Integrating Hardware in the Loop in a Brewing Process
    Ortiz, Jessica S.
    Pila, Richard S.
    Yupangui, Joel A.
    Rosales, Marco M.
    APPLIED SCIENCES-BASEL, 2024, 14 (05):
  • [22] Gesture Recognition for Interactive Exercise Programs
    Perkins, Jedediah
    Pavel, Misha
    Jimison, Holly B.
    Scott, Susan
    2008 30TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY, VOLS 1-8, 2008, : 1915 - +
  • [23] Speech and gesture recognition interactive robot
    Bommi, R. M.
    Vijay, J.
    Manohar, V. Murali
    Kumar, J. P. Dinesh
    Sriram, D.
    MATERIALS TODAY-PROCEEDINGS, 2021, 47 : 37 - 40
  • [24] Facial gesture recognition for interactive applications
    Algorri, ME
    Escobar, A
    PROCEEDINGS OF THE FIFTH MEXICAN INTERNATIONAL CONFERENCE IN COMPUTER SCIENCE (ENC 2004), 2004, : 188 - 195
  • [25] Visual gesture interfaces for virtual environments
    O'Hagan, RG
    Zelinsky, A
    Rougeaux, S
    INTERACTING WITH COMPUTERS, 2002, 14 (03) : 231 - 250
  • [26] Research on Dynamic and Static Fusion Polymorphic Gesture Recognition Algorithm for Interactive Teaching Interface
    Feng, Zhiquan
    Xu, Tao
    Yang, Xiaohui
    Tian, Jinglan
    Yi, Jiangyan
    Zhao, Ke
    COGNITIVE SYSTEMS AND SIGNAL PROCESSING, PT II, 2019, 1006 : 104 - 115
  • [27] Assistive Design for Elderly Living Ambient using Voice and Gesture Recognition System
    Basanta, Haobijam
    Huang, Yo-Ping
    Lee, Tsu-Tian
    2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2017, : 840 - 845
  • [28] A model-based design process for interactive virtual environments
    Cuppens, Erwin
    Raymaekers, Chris
    Coninx, Karin
    INTERACTIVE SYSTEMS: DESIGN, SPECIFICATION, AND VERIFICATION, 2006, 3941 : 225 - +
  • [29] Gesture Recognition in Smartwatches Using LSTM for Interaction in Low-Cost Virtual Environments
    Silva, Leonardo
    Soares, Fabrizzio
    Felix, Juliana
    Cardoso, Luciana
    Aranha, Renan Vinicius
    Nascimento, Thamer Horbylon
    PROCEEDINGS OF 26TH SYMPOSIUM ON VIRTUAL AND AUGMENTED REALITY, SVR 2024, 2024, : 284 - 288
  • [30] Design an Interactive User Interface with Integration of Dynamic Gesture and Handwritten Numeral Recognition
    Sheu, Jia-Shing
    Huang, Guo-Shing
    Huang, Ya-Ling
    2014 INTERNATIONAL SYMPOSIUM ON COMPUTER, CONSUMER AND CONTROL (IS3C 2014), 2014, : 1295 - 1298