Skeleton-based action recognition via spatial and temporal transformer networks

Cited by: 250
Authors
Plizzari, Chiara [1 ,2 ]
Cannici, Marco [1 ]
Matteucci, Matteo [1 ]
Affiliations
[1] Politecn Milan, Via Giuseppe Ponzio 34-5, I-20133 Milan, Italy
[2] Politecn Torino, Corso Duca Abruzzi 24, I-10129 Turin, Italy
Keywords
Representation learning; Graph CNN; Self-attention; 3D skeleton; Action recognition;
DOI
10.1016/j.cviu.2021.103219
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Skeleton-based Human Activity Recognition has attracted great interest in recent years, as skeleton data has proven robust to illumination changes, body scales, dynamic camera views, and complex backgrounds. In particular, Spatial-Temporal Graph Convolutional Networks (ST-GCN) have proven effective in learning both spatial and temporal dependencies on non-Euclidean data such as skeleton graphs. Nevertheless, effectively encoding the latent information underlying the 3D skeleton remains an open problem, especially when it comes to extracting useful information from joint motion patterns and their correlations. In this work, we propose a novel Spatial-Temporal Transformer network (ST-TR) which models dependencies between joints using the Transformer self-attention operator. In our ST-TR model, a Spatial Self-Attention module (SSA) is used to capture intra-frame interactions between different body parts, and a Temporal Self-Attention module (TSA) to model inter-frame correlations. The two are combined in a two-stream network, whose performance is evaluated on three large-scale datasets, NTU-RGB+D 60, NTU-RGB+D 120, and Kinetics Skeleton 400, consistently improving over the backbone results. Compared with methods that use the same input data, the proposed ST-TR achieves state-of-the-art performance on all datasets when using joint coordinates as input, and results on par with the state of the art when bone information is added.
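The core idea of the two modules can be illustrated with a minimal NumPy sketch, which is not the paper's implementation: a single-head scaled dot-product self-attention with random, untrained projection matrices is applied once along the joint axis (SSA, joints attending to each other within a frame) and once along the time axis (TSA, each joint attending to itself across frames). The tensor layout `(frames, joints, channels)` and the 25-joint count (as in NTU-RGB+D) are assumptions for illustration only.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over the first axis of x.

    x:          (n, d) sequence of n tokens with d features
    wq, wk, wv: (d, d) projection matrices (hypothetical single head)
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ v                               # (n, d) attended features

rng = np.random.default_rng(0)
T, V, C = 4, 25, 8                 # frames, joints (25 as in NTU), channels
x = rng.standard_normal((T, V, C))
wq, wk, wv = (rng.standard_normal((C, C)) for _ in range(3))

# Spatial Self-Attention (SSA): within each frame, joints attend to each other.
ssa = np.stack([self_attention(x[t], wq, wk, wv) for t in range(T)])

# Temporal Self-Attention (TSA): each joint attends to itself across frames.
tsa = np.stack([self_attention(x[:, v], wq, wk, wv) for v in range(V)], axis=1)

print(ssa.shape, tsa.shape)        # both (4, 25, 8), same layout as the input
```

In the actual ST-TR model the two attention streams are trained networks combined in a two-stream architecture; here they merely demonstrate that spatial and temporal attention differ only in which axis the tokens are taken from.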
Pages: 10