Skeleton-based action recognition via spatial and temporal transformer networks

Cited by: 250
Authors
Plizzari, Chiara [1 ,2 ]
Cannici, Marco [1 ]
Matteucci, Matteo [1 ]
Affiliations
[1] Politecnico di Milano, Via Giuseppe Ponzio 34-5, I-20133 Milan, Italy
[2] Politecnico di Torino, Corso Duca Abruzzi 24, I-10129 Turin, Italy
Keywords
Representation learning; Graph CNN; Self-attention; 3D skeleton; Action recognition;
DOI
10.1016/j.cviu.2021.103219
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Skeleton-based Human Activity Recognition has attracted great interest in recent years, as skeleton data have proven robust to illumination changes, body scales, dynamic camera views, and complex backgrounds. In particular, Spatial-Temporal Graph Convolutional Networks (ST-GCN) have proven effective in learning both spatial and temporal dependencies on non-Euclidean data such as skeleton graphs. Nevertheless, an effective encoding of the latent information underlying the 3D skeleton is still an open problem, especially when it comes to extracting effective information from joint motion patterns and their correlations. In this work, we propose a novel Spatial-Temporal Transformer network (ST-TR) that models dependencies between joints using the Transformer self-attention operator. In our ST-TR model, a Spatial Self-Attention module (SSA) is used to capture intra-frame interactions between different body parts, and a Temporal Self-Attention module (TSA) to model inter-frame correlations. The two are combined in a two-stream network, whose performance is evaluated on three large-scale datasets, NTU-RGB+D 60, NTU-RGB+D 120, and Kinetics Skeleton 400, consistently improving over the backbone results. Compared with methods that use the same input data, the proposed ST-TR achieves state-of-the-art performance on all datasets when using joint coordinates as input, and results on par with the state of the art when bone information is added.
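As the abstract describes, the SSA module applies Transformer self-attention across the joints within each frame, while TSA applies it along time for each joint. The snippet below is a minimal, illustrative PyTorch sketch of such a spatial self-attention layer over a skeleton tensor of shape (N, C, T, V); the class name, head count, and projection layout are assumptions made for illustration and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Multi-head self-attention across the joints of each frame (illustrative sketch)."""

    def __init__(self, in_channels: int, out_channels: int, num_heads: int = 8):
        super().__init__()
        assert out_channels % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = out_channels // num_heads
        # 1x1 convolutions project each joint's feature vector to queries, keys and values.
        self.to_qkv = nn.Conv2d(in_channels, 3 * out_channels, kernel_size=1)
        self.out_proj = nn.Conv2d(out_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V) -- batch, channels, frames, joints
        n, _, t, j = x.shape
        q, k, val = self.to_qkv(x).chunk(3, dim=1)               # each (N, C_out, T, V)

        def split_heads(z):
            # (N, C_out, T, V) -> (N, heads, T, V, head_dim)
            return z.reshape(n, self.num_heads, self.head_dim, t, j).permute(0, 1, 3, 4, 2)

        q, k, val = split_heads(q), split_heads(k), split_heads(val)
        # Attention scores between every pair of joints within the same frame.
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5  # (N, heads, T, V, V)
        attn = attn.softmax(dim=-1)
        out = attn @ val                                         # (N, heads, T, V, head_dim)
        out = out.permute(0, 1, 4, 2, 3).reshape(n, -1, t, j)    # back to (N, C_out, T, V)
        return self.out_proj(out)

# Example: a batch of 2 clips, 64 channels, 30 frames, 25 joints (NTU skeleton layout).
x = torch.randn(2, 64, 30, 25)
ssa = SpatialSelfAttention(in_channels=64, out_channels=64)
print(ssa(x).shape)  # torch.Size([2, 64, 30, 25])
```

A temporal counterpart would follow the same pattern, with attention computed across the T frames of each joint instead of across the V joints of each frame.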
Pages: 10