Spatial-Temporal Graph Convolutional Framework for Yoga Action Recognition and Grading

Cited by: 0
Author
Wang, Shu [1 ]
Affiliation
[1] Inner Mongolia Minzu Univ, Sch Phys Educ, Tongliao 028000, Inner Mongolia, Peoples R China
Keywords
FEATURES; BODY;
DOI
10.1155/2022/7500525
Chinese Library Classification
Q [Biological Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
The rapid development of the Internet has changed our lives, and many people now prefer to learn yoga through online videos. However, beginners cannot master standard yoga poses from videos alone, and advanced poses performed incorrectly can cause serious injury or even disability. To address this problem, we propose a yoga action recognition and grading system based on a spatial-temporal graph convolutional neural network. First, we capture yoga movement data with a depth camera. We then label the yoga exercise videos frame by frame using a long short-term memory network, extract skeletal joint-point features with graph convolutions, and arrange the video frames along the spatial-temporal dimension, correlating the joints within each frame and across neighboring frames to capture the connections between joints. Finally, the recognized yoga movements are predicted and graded. Experiments show that our method accurately recognizes and classifies yoga poses; it can also determine whether a pose is standard and give timely feedback to practitioners, preventing injuries caused by nonstandard poses.
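The spatial-temporal graph convolution described in the abstract can be illustrated with a minimal numpy sketch: a spatial step aggregates each joint with its skeletal neighbors via a normalized adjacency matrix, and a temporal step links the same joint across neighboring frames. The toy 5-joint skeleton, edge list, and layer sizes below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Toy skeleton: 5 joints, with joint 1 as a hub (illustrative only).
N_JOINTS = 5
EDGES = [(0, 1), (1, 2), (1, 3), (1, 4)]

def normalized_adjacency(n_joints, edges):
    """Build A_hat = D^{-1/2} (A + I) D^{-1/2} for spatial aggregation."""
    a = np.eye(n_joints)                      # self-loops
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def st_gcn_step(x, a_hat, w, t_kernel=3):
    """One spatial graph convolution followed by a temporal convolution.

    x: (T, V, C_in) joint features over T frames and V joints
    w: (C_in, C_out) learnable spatial weights
    """
    # Spatial step: mix each joint's features with its skeletal neighbors.
    spatial = np.einsum("uv,tvc->tuc", a_hat, x) @ w
    # Temporal step: slide a window over frames so each joint is
    # correlated with the same joint in neighboring frames.
    t = spatial.shape[0]
    pad = t_kernel // 2
    padded = np.pad(spatial, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    return np.stack([padded[i:i + t_kernel].mean(axis=0) for i in range(t)])

rng = np.random.default_rng(0)
x = rng.normal(size=(8, N_JOINTS, 3))         # 8 frames, 3-D joint coordinates
w = rng.normal(size=(3, 4))                   # project to 4 feature channels
a_hat = normalized_adjacency(N_JOINTS, EDGES)
features = st_gcn_step(x, a_hat, w)
print(features.shape)                         # (8, 5, 4)
```

In a full model these features would be stacked over several such layers and pooled before the classification and grading heads.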
Pages: 9