STV-based video feature processing for action recognition

Cited by: 9
Authors
Wang, Jing [1 ]
Xu, Zhijie [1 ]
Affiliations
[1] Univ Huddersfield, Sch Comp & Engn, Huddersfield HD1 3DH, W Yorkshire, England
Keywords
Video events; Spatio-temporal volume; 3D segmentation; Region intersection; Action recognition; HUMAN MOVEMENT; MOTION; VISUALIZATION; SEGMENTATION; DISTANCE; MODELS
DOI
10.1016/j.sigpro.2012.06.009
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline classification codes
0808 ; 0809 ;
Abstract
In comparison to still image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made on image processing in the last decade, with successful applications in face matching and object recognition, video-based event detection still remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and Region Intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering, which reduce the number of voxels (volumetric pixels) that must be processed in each operational cycle of the implemented system. The encouraging features and the improvements in operational performance registered in the experiments are discussed at the end. (C) 2012 Elsevier B.V. All rights reserved.
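To make the abstract's core idea concrete: an STV stacks a video's foreground silhouettes along the time axis into a 3D voxel volume, and two actions can then be compared by how much their volumes intersect. The sketch below is an illustrative reconstruction only, not the authors' implementation; the set-based voxel representation and the per-frame `frame_weight` function (a stand-in for the paper's "coefficient factor-boosted" intersection) are assumptions made for clarity.

```python
def stv_match_score(voxels_a, voxels_b, frame_weight=None):
    """Weighted intersection-over-union of two spatio-temporal volumes.

    voxels_a, voxels_b: sets of (x, y, t) coordinates of foreground
        voxels, i.e. the stacked silhouettes of an action clip.
    frame_weight: optional function t -> weight, emphasising some time
        slices over others (hypothetical stand-in for the paper's
        coefficient-boosting mechanism; defaults to uniform weights).
    Returns a score in [0, 1]; 1.0 means identical volumes.
    """
    if frame_weight is None:
        frame_weight = lambda t: 1.0
    inter = sum(frame_weight(t) for (_, _, t) in voxels_a & voxels_b)
    union = sum(frame_weight(t) for (_, _, t) in voxels_a | voxels_b)
    return inter / union if union else 0.0


def filter_static_voxels(voxels):
    """Crude voxel-reduction step: drop voxels that also appear at the
    same (x, y) position in the previous frame, keeping only voxels on
    moving region boundaries (an assumed, simplified filtering rule)."""
    return {(x, y, t) for (x, y, t) in voxels if (x, y, t - 1) not in voxels}
```

In a template-matching setting, a query clip's filtered STV would be scored against a library of pre-built action templates and assigned the label of the best-scoring one; filtering first shrinks the voxel sets that each intersection must traverse, which is the kind of per-cycle saving the abstract refers to.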
Pages: 2151-2168
Number of pages: 18
Related papers
50 records in total
  • [21] Badminton video action recognition based on time network
    Zhi, Juncai
    Sun, Zijie
    Zhang, Ruijie
    Zhao, Zhouxiang
    JOURNAL OF COMPUTATIONAL METHODS IN SCIENCES AND ENGINEERING, 2023, 23 (05) : 2739 - 2752
  • [22] An overview of sparse representation based action recognition in video
    Ushapreethi, P.
    Lakshmipriya, G. G.
    2018 2ND INTERNATIONAL CONFERENCE ON COMPUTER, COMMUNICATION, AND SIGNAL PROCESSING (ICCCSP): SPECIAL FOCUS ON TECHNOLOGY AND INNOVATION FOR SMART ENVIRONMENT, 2018, : 63 - 67
  • [23] Video-based cattle identification and action recognition
    Chuong Nguyen
    Wang, Dadong
    Von Richter, Karl
    Valencia, Philip
    Alvarenga, Flavio A. P.
    Bishop-Hurley, Gregory
    2021 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA 2021), 2021, : 441 - 445
  • [24] ACTION RECOGNITION BASED ON KINEMATIC REPRESENTATION OF VIDEO DATA
    Sun, Xin
    Huang, Di
    Wang, Yunhong
    Qin, Jie
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 1530 - 1534
  • [25] Human-Body Action Recognition Based on Dense Trajectories and Video Saliency
    Gao Deyong
    Kang Zibing
    Wang Song
    Wang Yangping
    LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (24)
  • [26] A novel feature for action recognition
    Wen, Hao
    Lu, Zhe-Ming
    Cui, Jia-Lin
    Li, Hao-Lai
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (14) : 41441 - 41456
  • [27] Temporal sparse feature auto-combination deep network for video action recognition
    Wang, Qicong
    Gong, Dingxi
    Qi, Man
    Shen, Yehu
    Lei, Yunqi
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2018, 30 (23)
  • [28] Action recognition on continuous video
    Chang, Y. L.
    Chan, C. S.
    Remagnino, P.
    NEURAL COMPUTING & APPLICATIONS, 2021, 33 (04) : 1233 - 1243