Human action recognition based on point context tensor shape descriptor

Cited by: 1
Authors
Li, Jianjun [1 ,2 ]
Mao, Xia [1 ]
Chen, Lijiang [1 ]
Wang, Lan [1 ]
Affiliations
[1] Beihang Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
[2] Inner Mongolia Univ Sci & Technol, Sch Elect & Informat Engn, Baotou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
action recognition; tensor mode; dynamic time warping; tensor shape descriptor; view-invariant;
DOI
10.1117/1.JEI.26.4.043024
CLC Classification Codes
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Motion trajectory recognition is one of the most important means of determining the identity of a moving object, and a compact, discriminative feature representation can improve trajectory recognition accuracy. This paper presents an efficient framework for action recognition using a three-dimensional skeleton kinematic joint model. First, we propose a rotation-, scale-, and translation-invariant shape descriptor based on point context (PC) and the normal vector of the hypersurface, which jointly characterizes local motion and shape information; in addition, a key-trajectory extraction algorithm based on a confidence coefficient is proposed to reduce randomness and computational complexity. Second, to reduce the time complexity of eigenvalue decomposition, we propose a PC-based tensor shape descriptor (TSD) that globally captures spatial layout and temporal order while preserving the spatial information of each frame. A multilinear projection, realized by tensor dynamic time warping, then maps the TSD to a low-dimensional tensor subspace of the same size. Experimental results show that the proposed shape descriptor is effective and feasible, and that the approach achieves considerable accuracy improvements over state-of-the-art approaches on a public action dataset. (C) 2017 SPIE and IS&T
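The abstract describes two computational steps that a short sketch can make concrete: an invariance-preserving per-frame shape descriptor computed from the 3-D skeleton joints, and a dynamic-time-warping alignment between descriptor sequences of different lengths. The following is a minimal Python sketch under stated assumptions, not the authors' implementation: `frame_descriptor` and `dtw_distance` are hypothetical names, the pairwise-distance descriptor is a generic stand-in for the paper's point-context descriptor, and plain vector-valued DTW stands in for the tensor DTW the paper uses for multilinear projection.

```python
import numpy as np

def frame_descriptor(joints):
    """Rotation-, scale-, and translation-invariant per-frame descriptor.

    `joints` is an (N, 3) array of 3-D skeleton joint positions.
    Pairwise inter-joint distances are invariant to rotation and
    translation; dividing by their mean adds scale invariance.
    (Generic stand-in for the paper's PC descriptor.)
    """
    diffs = joints[:, None, :] - joints[None, :, :]   # (N, N, 3) displacement tensor
    dists = np.linalg.norm(diffs, axis=-1)            # (N, N) distance matrix
    iu = np.triu_indices(len(joints), k=1)            # upper triangle, no diagonal
    vec = dists[iu]
    return vec / vec.mean()                           # normalize out global scale

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two descriptor sequences.

    Aligns sequences of different lengths by accumulating the cheapest
    monotone frame-to-frame matching cost. (Simplified, vector-valued
    stand-in for the tensor DTW described in the abstract.)
    """
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: two random 20-joint skeleton sequences of different lengths.
rng = np.random.default_rng(0)
seq_a = [frame_descriptor(rng.normal(size=(20, 3))) for _ in range(30)]
seq_b = [frame_descriptor(rng.normal(size=(20, 3))) for _ in range(25)]
print(f"DTW distance: {dtw_distance(seq_a, seq_b):.3f}")
```

In a nearest-neighbor setup, a test sequence would be assigned the label of the training sequence with the smallest DTW distance. The tensor formulation in the paper additionally preserves the per-frame spatial layout that flattening each frame into a vector, as above, discards.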
Pages: 10