SpatioTemporal focus for skeleton-based action recognition

Cited by: 58
Authors
Wu, Liyu [1 ]
Zhang, Can [2 ]
Zou, Yuexian [1 ,3 ]
Affiliations
[1] Peking Univ, Sch ECE, ADSPLAB, Shenzhen, Peoples R China
[2] Tencent Media Lab, Shenzhen, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
Action recognition; Skeleton topology; Graph convolutional network;
DOI
10.1016/j.patcog.2022.109231
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph convolutional networks (GCNs) are widely adopted in skeleton-based action recognition due to their powerful ability to model data topology. We argue that the performance of recently proposed skeleton-based action recognition methods is limited by the following factors. First, the predefined graph structures are shared throughout the network, lacking the flexibility and capacity to model multi-grain semantic information. Second, the relations among global joints are not fully exploited by graph local convolution, which may lose implicit joint relevance. For instance, actions such as running and waving are performed by the co-movement of body parts and joints, e.g., legs and arms, yet these are located far apart in the physical skeleton connectivity. Inspired by recent attention mechanisms, we propose a multi-grain contextual focus module, termed MCF, to capture action-related relational information from body joints and parts. As a result, MCF yields more explainable representations for different skeleton action sequences. In this study, we follow the common practice of densely sampling the input skeleton sequences, which introduces considerable redundancy since the number of sampled instances is unrelated to the action itself. To reduce this redundancy, a temporal discrimination focus module, termed TDF, is developed to capture the locally sensitive points of the temporal dynamics. MCF and TDF are integrated into a standard GCN backbone to form a unified architecture, named STF-Net. Based on multi-grain context aggregation and temporal dependency modeling, STF-Net is able to capture robust movement patterns from skeleton topology structures. Extensive experimental results show that STF-Net achieves state-of-the-art results on three challenging benchmarks: NTU-RGB+D 60, NTU-RGB+D 120, and Kinetics-Skeleton. (c) 2022 Elsevier Ltd. All rights reserved.
Pages: 12
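The abstract above describes two attention-style modules, MCF (a multi-grain contextual focus over joints and body parts) and TDF (a temporal discrimination focus over frames), plugged into a standard GCN backbone. The record does not give their exact formulations, so the PyTorch sketch below is only an illustrative assumption of what such focus modules could look like on a skeleton feature map of shape (N, C, T, V): JointFocus re-weights the V joints and FrameFocus gates redundant frames. The class names, shapes, and attention designs are hypothetical and are not the authors' STF-Net implementation.

```python
# Illustrative sketch only: the MCF/TDF designs below are assumptions loosely inspired
# by standard joint/frame attention used with skeleton GCNs, not the paper's modules.
import torch
import torch.nn as nn


class JointFocus(nn.Module):
    """Hypothetical spatial focus: re-weights the V joints of a (N, C, T, V) feature map."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V) -> per-joint descriptors by averaging over time: (N, V, C)
        desc = x.mean(dim=2).transpose(1, 2)
        # one attention score per joint, normalized across the V joints
        att = torch.softmax(self.fc(desc).squeeze(-1), dim=-1)  # (N, V)
        return x * att[:, None, None, :]  # broadcast over C and T


class FrameFocus(nn.Module):
    """Hypothetical temporal focus: gates redundant frames of a (N, C, T, V) feature map."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V) -> per-frame descriptors by averaging over joints: (N, T, C)
        desc = x.mean(dim=3).transpose(1, 2)
        att = torch.sigmoid(self.fc(desc).squeeze(-1))  # (N, T), per-frame gate in [0, 1]
        return x * att[:, None, :, None]  # broadcast over C and V


if __name__ == "__main__":
    # batch of 2, 64 channels, 50 frames, 25 joints (25 joints as in the NTU skeleton)
    feat = torch.randn(2, 64, 50, 25)
    feat = JointFocus(64)(feat)
    feat = FrameFocus(64)(feat)
    print(feat.shape)  # torch.Size([2, 64, 50, 25])
```

In this sketch the two modules act as residual-free multiplicative gates, so they can be dropped between GCN blocks without changing feature shapes; whether the actual MCF/TDF use softmax joint weights, sigmoid frame gates, or a different aggregation is not specified in the abstract.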