Learning Spatial and Temporal Extents of Human Actions for Action Detection

Cited by: 38
Authors
Zhou, Zhong [1]
Shi, Feng [1]
Wu, Wei [1]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
Keywords
Action localization; action recognition; discriminative latent variable model; split-and-merge; framework; models
DOI
10.1109/TMM.2015.2404779
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
For the problem of action detection, most existing methods require that the relevant portions of the action of interest in training videos be manually annotated with bounding boxes. Some recent works have tried to avoid this tedious manual annotation by automatically identifying the relevant portions in training videos. However, these methods address the identification in either the spatial or the temporal domain alone, and may therefore include irrelevant content from the other domain. Such irrelevant content is undesirable in the training phase and leads to degraded detection performance. This paper advances prior work by proposing a joint learning framework that simultaneously identifies the spatial and temporal extents of the action of interest in training videos. To obtain pixel-level localization results, our method uses dense trajectories extracted from videos as local features to represent actions. We first present a trajectory split-and-merge algorithm that segments a video into the background and several separate foreground moving objects; the algorithm exploits the inherent temporal smoothness of human actions to facilitate segmentation. Then, using the latent SVM framework on the segmentation results, the spatial and temporal extents of the action of interest are treated as latent variables that are inferred jointly with action recognition. Experiments on two challenging datasets show that action detection with our learned spatial and temporal extents is superior to state-of-the-art methods.
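To make the latent-variable formulation above concrete, the following is a minimal Python sketch of the detection-time scoring step, assuming dense trajectories have already been grouped by a foreground/background segmentation such as the split-and-merge step. The function names, the trajectory dictionary fields (frame, segment, word), and the codebook size are illustrative assumptions rather than details taken from the paper; the point is only that the spatial and temporal extent acts as a latent variable that is maximized over during scoring.

    import numpy as np

    def encode(trajectories, extent):
        # Hypothetical feature encoding: a normalized bag-of-features histogram
        # over the dense trajectories that fall inside a candidate extent,
        # given as (start frame, end frame, foreground segment id).
        t0, t1, segment_id = extent
        hist = np.zeros(4000)  # codebook size is an illustrative choice
        for tr in trajectories:
            if t0 <= tr["frame"] <= t1 and tr["segment"] == segment_id:
                hist[tr["word"]] += 1.0
        return hist / max(hist.sum(), 1.0)

    def score(w, trajectories, candidate_extents):
        # Latent-SVM-style scoring: the detection score is the maximum of the
        # linear score w . phi(x, h) over candidate spatio-temporal extents h.
        best_score, best_extent = -np.inf, None
        for extent in candidate_extents:
            s = float(w @ encode(trajectories, extent))
            if s > best_score:
                best_score, best_extent = s, extent
        return best_score, best_extent

In the paper, these latent extents are inferred jointly with learning the model weights under the latent SVM objective during training; the sketch shows only the inference/scoring side at detection time.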
Pages: 512-525
Number of pages: 14
Related papers (50 records in total)
  • [1] Modelling a Framework to Obtain Violence Detection with Spatial-Temporal Action Localization
    Monteiro, Carlos
    Duraes, Dalila
    INFORMATION SYSTEMS AND TECHNOLOGIES, WORLDCIST 2022, VOL 1, 2022, 468 : 630 - 639
  • [2] HUMAN ACTION RECOGNITION VIA SPATIAL AND TEMPORAL METHODS
    Eroglu, Hulusi
    Gokce, C. Onur
    Ilk, H. Gokhan
    2014 22ND SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2014, : 104 - 107
  • [3] Temporal Action Detection Methods Based on Deep Learning
    Shen, Junyi
    Ma, Li
    Zhang, Jikai
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (03)
  • [4] Human action recognition via multi-task learning base on spatial-temporal feature
    Guo, Wenzhong
    Chen, Guolong
    INFORMATION SCIENCES, 2015, 320 : 418 - 428
  • [5] Extracting hierarchical spatial and temporal features for human action recognition
    Zhang, Keting
    Zhang, Liqing
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (13) : 16053 - 16068
  • [6] Human Action Recognition Using Spatial and Temporal Sequences Alignment
    Li, Yandi
    Zhao, Zhihao
    SECOND INTERNATIONAL CONFERENCE ON OPTICS AND IMAGE PROCESSING (ICOIP 2022), 2022, 12328
  • [7] Action recognition and localization with spatial and temporal contexts
    Xu, Wanru
    Miao, Zhenjiang
    Yu, Jian
    Ji, Qiang
    NEUROCOMPUTING, 2019, 333 : 351 - 363
  • [8] Spatio-temporal action localization and detection for human recognition in big dataset
    Megrhi, Sameh
    Jmal, Marwa
    Souidene, Wided
    Beghdadi, Azeddine
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2016, 41 : 375 - 390
  • [9] Recognizing human actions by fusing spatial-temporal HOG and key point histogram
    Hsieh, Chaur-Heh
    Hung, Mao-Hsiung
    Huang, Wei-Yang
    JOURNAL OF THE CHINESE INSTITUTE OF ENGINEERS, 2016, 39 (05) : 538 - 547