Human Motion Gesture Recognition Algorithm in Video Based on Convolutional Neural Features of Training Images

Cited by: 21
Author
Bu, Xiangui [1 ]
Affiliation
[1] Shandong Sport Univ, Sch Sports & Phys Educ, Rizhao 276800, Peoples R China
Keywords
Feature extraction; Gesture recognition; Convolutional neural networks; Training; Convolution; Machine learning; Video sequences; training images; human motion gesture recognition; classifier; feature vector; SHORT-TERM-MEMORY; NETWORKS;
DOI
10.1109/ACCESS.2020.3020141
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
The main task of human motion gesture recognition is to recognize and analyze the behavior of human subjects in video. Although research in this field has achieved notable results, recognition in real-life scenes is strongly affected by factors such as camera movement, changes in target scale, dynamic backgrounds, viewing angle, and illumination. This article first proposes a new method of constructing human motion posture features to describe human behavior, based on deep convolutional neural network features and topic models. Experiments verify that, compared with the traditional features extracted from the fully connected layer of a convolutional neural network, the feature maps extracted from the convolutional layers are not only lower in dimension but also more discriminative. Secondly, building on these convolutional feature maps, a training-image downsampling strategy is used to overcome the interference caused by changes in the object's scale and shape. Finally, for basketball gesture recognition, the behavior of the legs and arms is analyzed in nine basketball actions: walking, running, jumping, standing dribbling, walking dribbling, running dribbling, shooting, passing, and receiving. Based on the corresponding signal waveform characteristics, a two-stage data division method for basketball is proposed, and unit action data are extracted for analysis to realize feature extraction. To select the most suitable classifier for basketball gesture recognition, the constructed feature vectors are used to train four different classifiers, whose performance is compared to realize the division of actions.
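The final classifier-selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors here are synthetic stand-ins for the CNN convolutional-layer features, and the four classifier types (SVM, k-NN, random forest, logistic regression) are assumptions, since the abstract does not name which four were compared.

```python
# Sketch: training four candidate classifiers on pre-extracted feature
# vectors and comparing held-out accuracy to pick the best one.
# The random features below stand in for real CNN conv-layer features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_classes = 9  # the nine basketball actions described in the abstract
X = rng.normal(size=(900, 128))           # placeholder 128-d feature vectors
y = np.repeat(np.arange(n_classes), 100)  # one label per action class
X += y[:, None] * 0.5                     # shift class means so they separate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Four hypothetical candidate classifiers (the paper's choices may differ).
classifiers = {
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "LogReg": LogisticRegression(max_iter=1000),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: accuracy = {acc:.3f}")
```

The classifier with the highest held-out accuracy would then be chosen to perform the final division of actions.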
Pages: 160025-160039
Number of pages: 15
References
36 in total (first 10 shown)
[1] [Anonymous], 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[2] Arsalan, M.; Santra, A. "Character Recognition in Air-Writing Based on Network of Radars for Human-Machine Interface." IEEE Sensors Journal, 2019, 19(19): 8855-8864.
[3] Cattaneo, A. A. P.; Boldrini, E. "You Learn by your Mistakes. Effective Training Strategies Based on the Analysis of Video-Recorded Worked-out Examples." Vocations and Learning, 2017, 10(1): 1-26.
[4] Chakraborty, B. K.; Sarma, D.; Bhuyan, M. K.; MacDorman, K. F. "Review of Constraints on Vision-Based Gesture Recognition for Human-Computer Interaction." IET Computer Vision, 2018, 12(1): 3-15.
[5] Chen, L.; Fu, J.; Wu, Y.; Li, H.; Zheng, B. "Hand Gesture Recognition Using Compact CNN via Surface Electromyography Signals." Sensors, 2020, 20(3).
[6] Chen, Q.; Chen, Y.-H.; Jiang, W.-G. "The Change Detection of High Spatial Resolution Remotely Sensed Imagery Based on OB-HMAD Algorithm and Spectral Features." Spectroscopy and Spectral Analysis, 2015, 35(6): 1709-1714.
[7] Cho, H.; Yoon, S. M. "Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening." Sensors, 2018, 18(4).
[8] Dawar, N.; Ostadabbas, S.; Kehtarnavaz, N. "Data Augmentation in Deep Learning-Based Fusion of Depth and Inertial Sensing for Action Recognition." IEEE Sensors Letters, 2019, 3(1).
[9] Dong, M., 2018, Journal of Beijing Institute of Fashion Technology (Natural Science Edition), 38: 37.
[10] dos Santos, C. C.; Aching Samatelo, J. L.; Vassallo, R. F. "Dynamic Gesture Recognition by Using CNNs and Star RGB: A Temporal Information Condensation." Neurocomputing, 2020, 400: 238-254.