Spatio-Temporal Laplacian Pyramid Coding for Action Recognition

Cited by: 191
Authors
Shao, Ling [1 ,2 ]
Zhen, Xiantong [2 ]
Tao, Dacheng [3 ,4 ]
Li, Xuelong [5 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Coll Elect & Informat Engn, Nanjing 210044, Jiangsu, Peoples R China
[2] Univ Sheffield, Dept Elect & Elect Engn, Sheffield S1 3JD, S Yorkshire, England
[3] Univ Technol Sydney, Ctr Quantum Computat & Intelligent Syst, Ultimo, NSW 2007, Australia
[4] Univ Technol Sydney, Fac Engn & Informat Technol, Ultimo, NSW 2007, Australia
[5] Chinese Acad Sci, Xian Inst Opt & Precis Mech, State Key Lab Transient Opt & Photon, Ctr OPT IMagery Anal & Learning, Xian 710119, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; computer vision; max pooling; spatio-temporal Laplacian pyramid; FEATURES; CONTEXT; MODEL;
DOI
10.1109/TCYB.2013.2273174
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
We present a novel descriptor, called spatio-temporal Laplacian pyramid coding (STLPC), for the holistic representation of human actions. In contrast to sparse representations based on detected local interest points, STLPC regards a video sequence as a whole, with spatio-temporal features extracted directly from it, which avoids the information loss inherent in sparse representations. By decomposing each sequence into a set of band-pass-filtered components, the proposed pyramid model localizes features residing at different scales and can therefore effectively encode the motion information of actions. To make the features further invariant and resistant to distortions as well as noise, a bank of 3-D Gabor filters is applied to each level of the Laplacian pyramid, followed by max pooling within filter bands and over spatio-temporal neighborhoods. Since the convolution and pooling are performed spatio-temporally, the coding model captures structural and motion information simultaneously and provides an informative representation of actions. The proposed method achieves superb recognition rates on the KTH, the multiview IXMAS, the challenging UCF Sports, and the newly released HMDB51 datasets, outperforming state-of-the-art methods and demonstrating its great potential for action recognition.
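The abstract describes a pipeline of band-pass pyramid decomposition followed by spatio-temporal max pooling. The sketch below illustrates those two stages on a toy (frames, height, width) volume; it is an assumption-laden simplification, not the authors' implementation: the 3-D Gabor filter bank is omitted, and the Gaussian smoothing of a true Laplacian pyramid is approximated with a separable binomial kernel.

```python
import numpy as np

def smooth3d(volume):
    """Separable 3-D binomial smoothing (a cheap stand-in for Gaussian filtering)."""
    kernel = np.array([0.25, 0.5, 0.25])
    for axis in range(3):
        volume = np.apply_along_axis(
            lambda line: np.convolve(line, kernel, mode="same"), axis, volume)
    return volume

def st_laplacian_pyramid(video, levels=2):
    """Decompose a (T, H, W) sequence into band-pass components plus a residual."""
    pyramid, current = [], video.astype(np.float64)
    for _ in range(levels):
        smoothed = smooth3d(current)
        pyramid.append(current - smoothed)   # band-pass level at this scale
        current = smoothed[::2, ::2, ::2]    # downsample T, H, W by 2
    pyramid.append(current)                  # low-pass residual
    return pyramid

def max_pool_3d(volume, p=2):
    """Max pooling over non-overlapping p x p x p spatio-temporal neighborhoods."""
    t, h, w = (s - s % p for s in volume.shape)
    v = volume[:t, :h, :w].reshape(t // p, p, h // p, p, w // p, p)
    return v.max(axis=(1, 3, 5))

# Toy 16-frame, 32x32-pixel clip
video = np.random.rand(16, 32, 32)
levels = st_laplacian_pyramid(video)
pooled = [max_pool_3d(lv) for lv in levels]
print([lv.shape for lv in levels])   # [(16, 32, 32), (8, 16, 16), (4, 8, 8)]
print([p.shape for p in pooled])     # [(8, 16, 16), (4, 8, 8), (2, 4, 4)]
```

In the paper, each pyramid level would additionally be convolved with the 3-D Gabor bank before pooling, and the pooled responses concatenated into the final holistic descriptor.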
Pages: 817-827 (11 pages)
Related Papers (50 items)
  • [1] A local descriptor based on Laplacian pyramid coding for action recognition
    Zhen, Xiantong
    Shao, Ling
    PATTERN RECOGNITION LETTERS, 2013, 34 (15) : 1899 - 1905
  • [2] Video Action Recognition Based on Spatio-temporal Feature Pyramid Module
    Gong, Suming
    Chen, Ying
    2020 13TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID 2020), 2020, : 338 - 341
  • [3] Spatio-temporal Cuboid Pyramid for Action Recognition using Depth Motion Sequences
    Ji, Xiaopeng
    Cheng, Jun
    Feng, Wei
    2016 EIGHTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI), 2016, : 208 - 213
  • [4] Spatio-Temporal Pyramid Cuboid Matching for Action Recognition Using Depth Maps
    Liang, Bin
    Zheng, Lihong
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 2070 - 2074
  • [5] Action recognition by spatio-temporal oriented energies
    Zhen, Xiantong
    Shao, Ling
    Li, Xuelong
    INFORMATION SCIENCES, 2014, 281 : 295 - 309
  • [6] Spatio-temporal information for human action recognition
    Yao, Li
    Liu, Yunjian
    Huang, Shihui
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2016,
  • [7] Learning Spatio-Temporal Dependencies for Action Recognition
    Cai, Qiao
    Yin, Yafeng
    Man, Hong
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 3740 - 3744
  • [8] Spatio-Temporal Fusion Networks for Action Recognition
    Cho, Sangwoo
    Foroosh, Hassan
    COMPUTER VISION - ACCV 2018, PT I, 2019, 11361 : 347 - 364
  • [9] A unified spatio-temporal human body region tracking approach to action recognition
    Al Harbi, Nouf
    Gotoh, Yoshihiko
    NEUROCOMPUTING, 2015, 161 : 56 - 64
  • [10] Hierarchical and Spatio-Temporal Sparse Representation for Human Action Recognition
    Tian, Yi
    Kong, Yu
    Ruan, Qiuqi
    An, Gaoyun
    Fu, Yun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (04) : 1748 - 1762