Detecting Human Action as the Spatio-Temporal Tube of Maximum Mutual Information

Cited by: 18
Authors
Wang, Taiqing [1 ,2 ]
Wang, Shengjin [1 ,2 ]
Ding, Xiaoqing [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[2] Tsinghua Natl Lab Informat Sci & Technol, Beijing 100084, Peoples R China
Funding
National High Technology Research and Development Program of China (863 Program); National Natural Science Foundation of China
Keywords
Action detection; feature trajectory; mutual information; spatio-temporal cuboid (ST-cuboid); spatio-temporal tube (ST-tube); RECOGNITION; MOTION; DENSE;
DOI
10.1109/TCSVT.2013.2276856
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Human action detection in complex scenes is a challenging problem due to its high-dimensional search space and dynamic backgrounds. To achieve efficient and accurate action detection, we represent a video sequence as a collection of feature trajectories and model a human action as the spatio-temporal tube (ST-tube) of maximum mutual information. First, a random forest is built to evaluate the mutual information of feature trajectories toward the action class, and then a first-order Markov model is introduced to recursively infer the action regions at consecutive frames. By exploiting the temporal continuity of feature trajectories, the action region can be inferred efficiently at large temporal intervals. Finally, we obtain an ST-tube by concatenating the consecutive action regions bounding the human bodies. Compared with the popular spatio-temporal cuboid action model, the proposed ST-tube model is not only more efficient but also more accurate in action localization. Experimental results on the KTH, CMU, and UCF Sports datasets validate the superiority of our approach over state-of-the-art methods in both localization accuracy and time efficiency.
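The abstract outlines a three-step pipeline: forest-based scoring of feature trajectories, first-order Markov inference of per-frame action regions, and concatenation of those regions into an ST-tube. The following minimal Python sketch illustrates that flow on toy data; the log-odds surrogate for mutual information, the fit_box helper, the blend weight alpha, and all variable names are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-ins: 200 feature trajectories, each with a 16-D descriptor,
# an (x, y) image location, and the frame index where it occurs.
n_traj, n_frames = 200, 30
X = rng.normal(size=(n_traj, 16))               # trajectory descriptors
y = (X[:, 0] > 0).astype(int)                   # toy action/background labels
pos = rng.uniform(0, 100, size=(n_traj, 2))     # (x, y) trajectory locations
frame = rng.integers(0, n_frames, size=n_traj)  # frame index per trajectory

# Step 1: a random forest scores each trajectory's relevance to the action
# class; the posterior log-odds act as a stand-in for mutual information.
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
p = np.clip(forest.predict_proba(X)[:, 1], 1e-6, 1 - 1e-6)
score = np.log(p / (1 - p))

def fit_box(xy):
    # Tight axis-aligned box (x_min, y_min, x_max, y_max) around points.
    return np.array([xy[:, 0].min(), xy[:, 1].min(),
                     xy[:, 0].max(), xy[:, 1].max()])

# Step 2: first-order Markov inference: the region at frame t blends the box
# fit to this frame's positively scored trajectories with the previous box.
# Step 3: concatenating the per-frame boxes yields the ST-tube.
alpha, prev, tube = 0.6, None, []
for t in range(n_frames):
    sel = (frame == t) & (score > 0)
    if sel.sum() >= 2:
        box = fit_box(pos[sel])
        prev = box if prev is None else alpha * box + (1 - alpha) * prev
    if prev is not None:
        tube.append((t, prev.copy()))

print(f"ST-tube covers {len(tube)} frames")

In the paper itself, inference exploits trajectory continuity to skip large temporal intervals; the frame-by-frame loop above is only the simplest serial analogue of that recursion.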
Pages: 277-290
Page count: 14