Human action recognition in immersive virtual reality based on multi-scale spatio-temporal attention network
Citations: 0
Authors:
Xiao, Zhiyong [1]; Chen, Yukun [1]; Zhou, Xinlei [1]; He, Mingwei [2]; Liu, Li [1]; Yu, Feng [1,2,3]; Jiang, Minghua [1,3]
Affiliations:
[1] Wuhan Text Univ, Sch Comp Sci & Artificial Intelligence, Wuhan, Peoples R China
[2] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore, Singapore
[3] Engn Res Ctr Hubei Prov Clothing Informat, Wuhan, Peoples R China
Abstract:
Wearable human action recognition (HAR) has practical applications in daily life. However, traditional HAR methods focus solely on identifying user movements and lack interactivity and user engagement. This paper proposes a novel immersive HAR method called MovPosVR, which employs virtual reality (VR) technology to create realistic scenes and enhance the user experience. To improve the accuracy of user action recognition in immersive HAR, a multi-scale spatio-temporal attention network (MSSTANet) is proposed. The network combines a convolutional residual squeeze-and-excitation (CRSE) module with a multi-branch convolution and long short-term memory (MCLSTM) module to extract spatio-temporal features and automatically select relevant features from action signals. Additionally, a multi-head attention with shared linear mechanism (MHASLM) module is designed to facilitate information interaction, further enhancing feature extraction and improving accuracy. MSSTANet achieves accuracy rates of 99.33% and 98.83% on the publicly available WISDM and PAMAP2 datasets, respectively, surpassing state-of-the-art networks. Our method showcases the potential to display user actions and position information in a virtual world, enriching user experience and interaction across diverse application scenarios.
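The abstract does not specify the internals of the CRSE module, but it builds on the standard squeeze-and-excitation idea: pool each channel of the convolutional feature map to a scalar, pass the pooled vector through a small bottleneck, and use the resulting sigmoid gates to reweight the channels. The following is a minimal NumPy sketch of that generic channel-attention gate over a 1-D sensor feature map; the function name, shapes, and reduction ratio are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Generic squeeze-and-excitation gate for a (channels, time) feature map."""
    z = x.mean(axis=1)                    # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)           # excitation: bottleneck layer + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # sigmoid gate in (0, 1) per channel
    return x * s[:, None]                 # reweight each channel of the input

# Toy dimensions: 8 channels, 16 time steps, reduction ratio 2 (assumed values).
rng = np.random.default_rng(0)
C, T, r = 8, 16, 2
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C))     # C -> C/r bottleneck weights
w2 = rng.standard_normal((C, C // r))     # C/r -> C expansion weights
y = squeeze_excite(x, w1, w2)
print(y.shape)                            # (8, 16): same shape, channels rescaled
```

Because every gate lies strictly between 0 and 1, the output is an attenuated copy of the input: informative channels are preserved while less relevant ones are suppressed, which matches the abstract's description of automatically selecting relevant features from action signals.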