Action recognition through fusion of sEMG and skeletal data in feature level

Cited by: 2
Authors
Wang X. [1 ]
Ding W. [1 ]
Bian S. [1 ]
Liu H. [2 ]
Affiliations
[1] Department of Automation, Institute of Electrical Engineering, Key Laboratory of Intelligent Rehabilitation and Neuromodulation of Hebei Province, Yanshan University, 438 West of Hebei Avenue, Haigang District, Qinhuangdao
[2] School of Mechanical Engineering and Automation, State Key Laboratory of Robotics and Systems, Harbin Institute of Technology Shenzhen, Nanshan District, Shenzhen
Funding
National Natural Science Foundation of China
Keywords
Action recognition; Feature extraction; Multimodal fusion;
DOI
10.1007/s12652-022-03867-0
Abstract
Human actions can be recognized from a single modality. However, the information obtained from a single modality is limited, because it captures only one type of physical attribute. It is therefore attractive to improve recognition accuracy by fusing two complementary modalities: surface electromyography (sEMG) and skeletal data. In this paper, we propose a general framework for the fusion of sEMG signals and skeletal data. First, vectors of locally aggregated descriptors (VLAD) are extracted from the sEMG sequences and the skeletal sequences, respectively. Second, the features obtained from the sEMG and skeletal data are mapped through differently weighted kernels using multiple kernel learning. Finally, the classification results are obtained from the multiple kernel learning model. A dataset of 18 types of human actions was collected with a Kinect V2 and a Thalmic Myo armband to verify our ideas. The experimental results show that the accuracy of human action recognition is improved by combining skeletal data with sEMG signals. © 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
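The abstract names two building blocks: VLAD encoding of per-sequence local descriptors, and a weighted combination of per-modality kernels (learned via multiple kernel learning in the paper, e.g. with the Shogun toolbox it cites). The sketch below illustrates both steps in plain NumPy with hand-set kernel weights instead of learned ones; all function names, the codebook, and the parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """Encode local descriptors from one sequence as a single VLAD vector.

    descriptors: (n, d) array of local features (e.g. per-frame sEMG or joint features).
    codebook:    (k, d) array of cluster centres (typically from k-means).
    Returns an L2-normalised (k*d,) VLAD vector.
    """
    # Assign each descriptor to its nearest codebook centre.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)

    # Accumulate residuals (descriptor minus its centre) per cluster.
    k, d = codebook.shape
    vlad = np.zeros((k, d))
    for i, c in enumerate(assign):
        vlad[c] += descriptors[i] - codebook[c]

    # Signed square-root (power) normalisation followed by global L2
    # normalisation, a common VLAD post-processing step.
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def combined_kernel(X_semg, X_skel, weights=(0.5, 0.5), gammas=(1.0, 1.0)):
    """Convex combination of one kernel per modality.

    MKL learns the weights jointly with the classifier; here they are
    fixed by hand purely to show the fusion structure at feature level.
    """
    return (weights[0] * rbf_kernel(X_semg, X_semg, gammas[0])
            + weights[1] * rbf_kernel(X_skel, X_skel, gammas[1]))
```

With per-sample VLAD vectors stacked into `X_semg` and `X_skel`, the combined Gram matrix can be fed to any kernel classifier; in the paper the weights themselves are optimised by the MKL solver rather than fixed.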
Pages: 4125-4134
Number of pages: 9
References
24 entries in total
[11]  
Liu J., Shahroudy A., Xu D., Kot A.C., Wang G., Skeleton-based action recognition using spatio-temporal LSTM network with trust gates, IEEE Trans Pattern Anal Mach Intell, 40, 12, pp. 3007-3021, (2018)
[12]  
Luvizon D.C., Tabia H., Picard D., Learning features combination for human action recognition from skeleton sequences, Pattern Recognit Lett, 99, pp. 13-20, (2017)
[13]  
Lopez-Nava I.H., Munoz-Melendez A., Complex human action recognition on daily living environments using wearable inertial sensors, ACM, (2016)
[14]  
Mahbub U., Imtiaz H., Rahman Ahad M.A., An optical flow based approach for action recognition, 14th International Conference on Computer and Information Technology, pp. 646-651, (2011)
[15]  
Sonnenburg S., Ratsch G., Schafer C., Scholkopf B., Large scale multiple kernel learning, J Mach Learn Res, 7, pp. 1531-1565, (2006)
[16]  
Sonnenburg S., Strathmann H., Shogun-Toolbox/Shogun: Shogun 6.1.0, (2017)
[17]  
Sun Y., Li C., et al., Gesture recognition based on Kinect and sEMG signal fusion, Mobile Netw Appl, 23, 4, pp. 797-805, (2018)
[18]  
Vrigkas M., Nikou C., Kakadiaris I.A., A review of human activity recognition methods, Front Robot AI, 2, (2015)
[19]  
Wei H., Jafari R., Kehtarnavaz N., Fusion of video and inertial sensing for deep learning-based human action recognition, Sensors, 19, 17, (2019)
[20]  
Xia L., Chen C.C., Aggarwal J.K., View invariant human action recognition using histograms of 3D joints, 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 20-27, (2012)