Learning joints relation graphs for video action recognition

Cited by: 0
Authors
Liu, Xiaodong [1 ]
Xu, Huating [1 ]
Wang, Miao [1 ]
Affiliations
[1] School of Software, Henan Institute of Engineering, Zhengzhou, China
Keywords
Deep learning;
DOI
Not available
Chinese Library Classification (CLC)
TB18 [Ergonomics]; Q98 [Anthropology];
Discipline classification codes
030303 ; 1201 ;
Abstract
Previous work on video action recognition has mainly focused on extracting spatial and temporal features from videos, or on capturing physical dependencies among body joints; the relations between joints are often ignored, yet modeling them is important for action recognition. Aiming to learn discriminative relations between joints, this paper proposes a joint spatial-temporal reasoning (JSTR) framework for recognizing actions in videos. For the spatial representation, a joint spatial relation graph is built to capture positional relations between joints. For the temporal representation, the temporal dynamics of each body joint are modeled by an intra-joint temporal relation graph. The spatial reasoning feature and the temporal reasoning feature are then fused to recognize the action. The effectiveness of the method is demonstrated on three real-world video action recognition datasets, across all of which the experiments show good performance. Copyright © 2022 Liu, Xu and Wang.
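The abstract's two graph constructions can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes 2D joint coordinates and uses simple pairwise Euclidean distances as a stand-in for the learned relation modeling, with mean pooling and concatenation as a placeholder for the fusion step. All array names and shapes are illustrative assumptions.

```python
import numpy as np

# Hypothetical skeleton input: T frames, J joints, 2D coordinates.
T, J = 4, 5
rng = np.random.default_rng(0)
joints = rng.standard_normal((T, J, 2))

# Joints spatial relation graph: per-frame pairwise distances
# between all joint pairs -> shape (T, J, J).
spatial_graph = np.linalg.norm(
    joints[:, :, None, :] - joints[:, None, :, :], axis=-1
)

# Intra-joint temporal relation graph: per-joint pairwise distances
# between all frame pairs -> shape (T, T, J).
temporal_graph = np.linalg.norm(
    joints[:, None, :, :] - joints[None, :, :, :], axis=-1
)

# Placeholder "reasoning" features: mean relation strength per joint,
# fused by concatenation (the paper would use learned features instead).
spatial_feat = spatial_graph.mean(axis=(0, 2))    # (J,)
temporal_feat = temporal_graph.mean(axis=(0, 1))  # (J,)
fused = np.concatenate([spatial_feat, temporal_feat])  # (2J,)
```

In a full model, the distance-based adjacency would be replaced by learned relation weights and the pooled features by a classifier head; the sketch only shows how the two graphs index joint pairs (spatially, within a frame) versus frame pairs (temporally, within a joint).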