Auxiliary criterion conversion via spatiotemporal semantic encoding and feature entropy for action recognition

Cited by: 0
Authors
Xiaoyan Meng
Guoliang Zhang
Songmin Jia
Xiuzhi Li
Xiangyin Zhang
Affiliations
[1] Beijing University of Technology, Faculty of Information Technology
[2] Beijing Key Laboratory of Computational Intelligence and Intelligent System, Ministry of Education
[3] Engineering Research Center of Digital Community
Source
The Visual Computer | 2021 / Vol. 37
Keywords
Action recognition; Spatiotemporal semantic feature; Feature entropy; Bag-of-visual-words model; Text-based relevance analysis;
DOI
Not available
Abstract
Video-based action recognition in realistic scenes is a core technology for human–computer interaction and smart surveillance. Although trajectory features combined with the bag-of-visual-words model have shown promising performance, they cannot effectively encode spatiotemporal interactive information, which is valuable for classification. To address this issue, we propose a spatiotemporal semantic feature (ST-SF) and convert it into an auxiliary criterion based on information entropy theory. First, we present a text-based relevance analysis method to estimate the textual labels of the objects most relevant to each action; these labels are used to train more targeted detectors based on a deep network. False detections are corrected by exploiting inter-frame cooperativity and dynamic programming to construct valid object tubes. Then, we design the ST-SF to encode the interactive information and define the concept and calculation of feature entropy based on the spatial distribution of ST-SFs on the training set. Finally, we implement a two-stage classification strategy using the resulting decision gains. Experimental results on three publicly available datasets demonstrate that our method is robust and improves upon state-of-the-art algorithms.
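The abstract's notion of feature entropy is defined in the paper over the spatial distribution of ST-SFs on the training set; the exact formulation is not given here. As a purely illustrative sketch, assuming a Shannon-style entropy over a normalized histogram of feature occurrences (the function name and input format are hypothetical):

```python
import math

def feature_entropy(counts):
    """Shannon entropy of a normalized count histogram (illustrative sketch;
    the paper defines feature entropy over the spatial distribution of
    ST-SFs on the training set, which may differ in detail)."""
    total = sum(counts)
    if total == 0:
        return 0.0
    entropy = 0.0
    for c in counts:
        if c > 0:
            p = c / total          # normalize counts to a probability
            entropy -= p * math.log2(p)
    return entropy

# A uniform distribution has maximal entropy (least discriminative feature);
# a peaked distribution has low entropy (more discriminative feature).
print(feature_entropy([4, 4, 4, 4]))   # 2.0
print(feature_entropy([16, 0, 0, 0]))  # 0.0
```

Under this reading, low-entropy features concentrate on few classes or regions and can contribute larger decision gains in a two-stage classification strategy such as the one the abstract describes.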
Pages: 1673-1690
Page count: 17