Energy-Based Periodicity Mining With Deep Features for Action Repetition Counting in Unconstrained Videos

Cited by: 7
Authors
Yin, Jianqin [1 ]
Wu, Yanchun [1 ]
Zhu, Chaoran [2 ]
Yin, Zijin [2 ]
Liu, Huaping [3 ]
Dang, Yonghao [1 ]
Liu, Zhiyi [1 ]
Liu, Jun [4 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
[2] Beijing Univ Posts & Telecommun, Int Sch, Beijing 100876, Peoples R China
[3] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[4] City Univ Hong Kong, Dept Mech Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Videos; Feature extraction; Principal component analysis; Motion segmentation; Market research; Task analysis; Noise measurement; Action repetition counting; deep ConvNets; MOTION DETECTION; SEGMENTATION;
DOI
10.1109/TCSVT.2021.3055220
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
Action repetition counting estimates how many times a repetitive motion occurs within an action, a relatively new, significant, and challenging problem. To solve it, we propose a new method that improves on traditional approaches in two respects: it requires no preprocessing and it applies to actions of arbitrary periodicity. Requiring no preprocessing makes our scheme convenient for real applications, and handling arbitrary periodicity makes the model better suited to real-world conditions. In terms of methodology, we first extract action features with ConvNets and then apply the Principal Component Analysis algorithm to derive intuitive periodic information from the chaotic high-dimensional features; second, we propose an energy-based adaptive feature mode selection scheme that selects the proper deep feature mode according to the background of the video; third, we construct the periodic waveform of the action based on high-energy rules by filtering out irrelevant information. Finally, we detect the peaks of the waveform to obtain the number of action repetitions. Our work makes two contributions: 1) we provide the significant insight that features extracted by ConvNets for action recognition can model the self-similarity periodicity of repetitive actions well, and 2) we present a high-energy-based periodicity mining rule using ConvNet features, which can process arbitrary actions without preprocessing. Experimental results show that our method achieves superior or comparable performance on three benchmark datasets, i.e., YT_Segments, QUVA, and RARV.
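The abstract describes a pipeline of ConvNet feature extraction, PCA projection, energy-based filtering, and peak detection. The following Python sketch illustrates only the PCA-and-peak-counting part of such a pipeline under simplifying assumptions; the function count_repetitions, its parameters, and the synthetic input are hypothetical stand-ins rather than the authors' implementation, and the energy-based feature mode selection and high-energy filtering rules are omitted.

# Minimal sketch, assuming per-frame deep features are already available
# as a (num_frames, feature_dim) array; not the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA
from scipy.signal import find_peaks

def count_repetitions(frame_features: np.ndarray, min_distance: int = 5) -> int:
    """Estimate a repetition count from per-frame deep features.

    frame_features: (num_frames, feature_dim) array, e.g. pooled ConvNet
                    activations per video frame (assumed input).
    min_distance:   minimum frames between peaks, a crude stand-in for the
                    paper's high-energy filtering of irrelevant information.
    """
    # Project the high-dimensional features onto the leading principal
    # component, which often exposes the repetitive structure.
    waveform = PCA(n_components=1).fit_transform(frame_features).ravel()

    # Normalize and count peaks; each peak is taken as one repetition.
    waveform = (waveform - waveform.mean()) / (waveform.std() + 1e-8)
    peaks, _ = find_peaks(waveform, distance=min_distance)
    return len(peaks)

if __name__ == "__main__":
    # Synthetic check: 200 frames of a noisy signal with 8 repetitions
    # embedded in 512-dimensional features.
    t = np.linspace(0, 8 * 2 * np.pi, 200)
    features = np.sin(t)[:, None] * np.random.randn(1, 512) \
               + 0.1 * np.random.randn(200, 512)
    print(count_repetitions(features))  # expected to be near 8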
Pages: 4812-4825
Page count: 14