Research on athlete’s wrong movement prediction method based on multimodal eye movement recognition

Cited by: 0
Author
Wang L. [1 ]
Affiliation
[1] Shangqiu Polytechnic, Shangqiu
Keywords
action mining; feature extraction; multimodal eye movement recognition; support vector machine; wrong movement;
DOI
10.1504/ijris.2022.126658
Abstract
To address the large prediction error, long running time, and high proportion of interference data produced by traditional methods, a method for predicting athletes' wrong movements based on multimodal eye movement recognition is proposed. First, a spectral clustering algorithm is used to mine the wrong movements. Second, the least-squares method is used to improve the support vector machine, and the improved support vector machine classifies athletes' wrong movements according to their statistical characteristics. Finally, based on the classification results and historical data on athletes' wrong movements, the trend of wrong movements is predicted with the multimodal eye movement recognition method, completing the prediction. Experimental results show that the proposed method yields a small prediction error, a short prediction time, and a low proportion of interference data. Copyright © 2022 Inderscience Enterprises Ltd.
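The "least-squares improvement" of the support vector machine described in the abstract is commonly realized as a Suykens-style LS-SVM, where the inequality constraints of the classic SVM are replaced by equality constraints so that training reduces to solving one linear system rather than a quadratic program. The sketch below is an illustrative assumption of that idea, not the authors' actual implementation; the class name, kernel choice, and hyperparameters are all hypothetical.

```python
# Minimal LS-SVM binary classifier sketch (Suykens formulation).
# Assumed, illustrative implementation - not the paper's code.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

class LSSVM:
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        """y must be in {-1, +1}. Training solves the block linear system
        [[0, y^T], [y, Omega + I/C]] @ [b; alpha] = [0; 1]."""
        n = len(y)
        Omega = np.outer(y, y) * rbf_kernel(X, X, self.gamma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:], A[1:, 0] = y, y
        A[1:, 1:] = Omega + np.eye(n) / self.C
        sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
        self.b, self.alpha, self.X, self.y = sol[0], sol[1:], X, y
        return self

    def predict(self, X_new):
        """Decision function: sign(sum_i alpha_i y_i K(x, x_i) + b)."""
        K = rbf_kernel(X_new, self.X, self.gamma)
        return np.sign(K @ (self.alpha * self.y) + self.b)
```

In the pipeline the abstract describes, such a classifier would be fitted on statistical features of movements grouped by spectral clustering, and its class outputs would then feed the trend-prediction stage.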
Pages: 176-183
Page count: 7
Related papers
18 in total
[1]  
Bulbul M.F., Tabussum S., Ali H., Exploring 3D human action recognition using STACOG on multi-view depth motion maps sequences, Sensors, 21, 11, pp. 3642-3653, (2021)
[2]  
Chen J., Kong J., Sun H., Spatiotemporal interaction residual networks with pseudo3d for video action recognition, Sensors, 20, 11, pp. 3126-3137, (2020)
[3]  
Climent-Perez P., Florez-Revuelta F., Improved action recognition with separable spatio-temporal attention using alternative skeletal and video pre-processing, Sensors, 21, 3, pp. 1005-1026, (2021)
[4]  
Cui Q., Sun H., Kong Y., Zhang X., Li Y., Efficient human motion prediction using temporal convolutional generative adversarial network, Information Sciences, 545, pp. 427-447, (2021)
[5]  
Ding Z., Yang C., Wang Z., Online adaptive prediction of human motion intention based on sEMG, Sensors, 21, 8, pp. 2882-2893, (2021)
[6]  
Giarmatzis G., Zacharaki E.I., Moustakas K., Real-time prediction of joint forces by motion capture and machine learning, Sensors, 20, 23, pp. 6933-6948, (2020)
[7]  
Lei Q., Du J.X., Zhang H.B., A survey of vision-based human action evaluation methods, Sensors, 19, 19, pp. 4129-4135, (2019)
[8]  
Li Y., Xu X., Xu J., Bilayer model for cross-view human action recognition based on transfer learning, Journal of Electronic Imaging, 28, 3, pp. 1-10, (2019)
[9]  
Little K., Pappachan B.K., Yang S., Elbow motion trajectory prediction using a multi-modal wearable system: a comparative analysis of machine learning techniques, Sensors, 21, 2, pp. 498-513, (2021)
[10]  
Liu Q., Chen E., Gao L., Energy-guided temporal segmentation network for multimodal human action recognition, Sensors, 20, 17, pp. 4673-3685, (2020)