Deep-Aligned Convolutional Neural Network for Skeleton-Based Action Recognition and Segmentation

Cited by: 16
Authors
Hosseini, Babak [1 ]
Montagne, Romain [2 ]
Hammer, Barbara [1 ]
Affiliations
[1] Bielefeld Univ, Ctr Cognit Interact Technol CITEC, Bielefeld, Germany
[2] Eurodecision, Versailles, France
Keywords
Convolutional neural networks; Action recognition; Time series alignment; Action segmentation; Motion
DOI
10.1007/s41019-020-00123-3
CLC classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Convolutional neural networks (CNNs) are deep learning frameworks well known for their notable performance on classification tasks, and many skeleton-based action recognition and segmentation (SBARS) algorithms therefore build on them in their designs. However, such applications generally suffer from a lack of spatial relationships between the input features of this data type. In addition, non-uniform temporal scaling is a common issue in skeleton-based data streams, leading to inputs of different sizes even within a single action category. In this work, we propose a novel deep-aligned convolutional neural network (DACNN) to tackle these challenges for the particular problem of SBARS. Our network introduces a new type of CNN filter that is trained based on its alignment to local subsequences of the input. These filters yield efficient predictions as well as interpretable patterns learned from the data. Furthermore, the DACNN framework can incrementally expand its deep structure based on its learning progress, making it flexible with respect to different SBARS datasets. We empirically evaluate our framework on real-world benchmarks and show that DACNN achieves performance competitive with the state of the art while benefiting from a less complicated yet more interpretable model.
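The abstract's "filters trained based on their alignments to local subsequences" can be illustrated with a minimal sketch. This is an assumption about the general idea, not the paper's actual method: here each filter is a short prototype sequence, and its activation at a time step is the negated dynamic-time-warping (DTW) distance to the local subsequence, replacing the dot product of an ordinary convolution. All names (`dtw`, `alignment_filter_response`) are illustrative.

```python
# Hedged sketch of an alignment-based filter response (illustrative only;
# the actual DACNN filters may differ from this simple DTW formulation).

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow match, insertion, or deletion, as in standard DTW.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def alignment_filter_response(signal, filt, stride=1):
    """Slide the prototype `filt` over `signal`, like a convolution,
    but score each window by its DTW alignment (higher = better match)."""
    w = len(filt)
    return [-dtw(signal[t:t + w], filt)
            for t in range(0, len(signal) - w + 1, stride)]

signal = [0, 0, 1, 2, 3, 2, 1, 0, 0]
filt = [1, 2, 3]                       # prototype pattern: a rising ramp
resp = alignment_filter_response(signal, filt)
best = max(range(len(resp)), key=resp.__getitem__)
print(best)  # → 2: the window [1, 2, 3] aligns perfectly with the filter
```

Because DTW tolerates local stretching and compression, such a filter responds to a pattern even when its temporal scale varies, which matches the abstract's motivation of handling non-uniform temporal scaling in skeleton streams.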
Pages: 126–139
Page count: 14