Human action recognition based on point context tensor shape descriptor

Cited: 1
Authors
Li, Jianjun [1 ,2 ]
Mao, Xia [1 ]
Chen, Lijiang [1 ]
Wang, Lan [1 ]
Affiliations
[1] Beihang Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
[2] Inner Mongolia Univ Sci & Technol, Sch Elect & Informat Engn, Baotou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
action recognition; tensor mode; dynamic time warping; tensor shape descriptor; view-invariant;
DOI
10.1117/1.JEI.26.4.043024
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Motion trajectory recognition is one of the most important means to determine the identity of a moving object. A compact and discriminative feature representation method can improve the trajectory recognition accuracy. This paper presents an efficient framework for action recognition using a three-dimensional skeleton kinematic joint model. First, we put forward a rotation-scale-translation-invariant shape descriptor based on point context (PC) and the normal vector of hypersurface to jointly characterize local motion and shape information. Meanwhile, an algorithm for extracting the key trajectory based on the confidence coefficient is proposed to reduce the randomness and computational complexity. Second, to decrease the eigenvalue decomposition time complexity, a tensor shape descriptor (TSD) based on PC that can globally capture the spatial layout and temporal order to preserve the spatial information of each frame is proposed. Then, a multilinear projection process is achieved by tensor dynamic time warping to map the TSD to a low-dimensional tensor subspace of the same size. Experimental results show that the proposed shape descriptor is effective and feasible, and the proposed approach obtains considerable performance improvement over the state-of-the-art approaches with respect to accuracy on a public action dataset. (C) 2017 SPIE and IS&T
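The abstract applies tensor dynamic time warping to map the tensor shape descriptor into a low-dimensional subspace. As background only, the sketch below shows classic dynamic time warping between two 1-D sequences; it is a hypothetical illustration of the underlying alignment idea, not the paper's multilinear tensor variant.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences.

    Illustrative sketch only: the paper's tensor DTW aligns whole
    tensor shape descriptors per frame, not scalar samples.
    """
    n, m = len(a), len(b)
    # D[i, j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0, 1, 2], [0, 0, 1, 2]))  # time-shifted copies align with cost 0.0
```

Because DTW warps the time axis, sequences performing the same action at different speeds can still be compared frame-by-frame; the paper extends this alignment to tensor-valued descriptors.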
Pages: 10
Related Papers
50 records
  • [31] View invariant human action recognition based on factorization and HMMs
    Li, Xi
    Fukui, Kazuhiro
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2008, E91D (07): : 1848 - 1854
  • [32] Efficient encoding of video descriptor distribution for action recognition
    Saremi, Mehrin
    Yaghmaee, Farzin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 : 6025 - 6043
  • [33] Learning principal orientations and residual descriptor for action recognition
    Chen, Lei
    Song, Zhanjie
    Lu, Jiwen
    Zhou, Jie
    PATTERN RECOGNITION, 2019, 86 : 14 - 26
  • [34] Efficient encoding of video descriptor distribution for action recognition
    Saremi, Mehrin
    Yaghmaee, Farzin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (9-10) : 6025 - 6043
  • [35] Efficient descriptor tree growing for fast action recognition
    Ubalde, S.
    Goussies, N. A.
    Mejail, M. E.
    PATTERN RECOGNITION LETTERS, 2014, 36 : 213 - 220
  • [36] Space-Variant Descriptor Sampling for Action Recognition Based on Saliency and Eye Movements
    Vig, Eleonora
    Dorr, Michael
    Cox, David
    COMPUTER VISION - ECCV 2012, PT VII, 2012, 7578 : 84 - 97
  • [37] T-VLAD: Temporal vector of locally aggregated descriptor for multiview human action recognition
    Naeem, Hajra Binte
    Murtaza, Fiza
    Yousaf, Muhammad Haroon
    Velastin, Sergio A.
    PATTERN RECOGNITION LETTERS, 2021, 148 : 22 - 28
  • [38] Human action recognition based on action relevance weighted encoding
    Yi, Yang
    Li, Ao
    Zhou, Xiaofeng
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2020, 80
  • [39] Human action recognition based on chaotic invariants
    Xia, Li-ming
    Huang, Jin-xia
    Tan, Lun-zheng
    JOURNAL OF CENTRAL SOUTH UNIVERSITY, 2013, 20 : 3171 - 3179
  • [40] Human action recognition based on scene semantics
    Hu, Tao
    Zhu, Xinyan
    Guo, Wei
    Wang, Shaohua
    Zhu, Jianfeng
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 : 28515 - 28536