Human Action Recognition from Inter-temporal Dictionaries of Key-Sequences

Cited: 0
Authors
Alfaro, Anali [1 ]
Mery, Domingo [1 ]
Soto, Alvaro [1 ]
Affiliations
[1] Pontificia Univ Catolica Chile, Dept Comp Sci, Santiago, Chile
Source
IMAGE AND VIDEO TECHNOLOGY, PSIVT 2013 | 2014, Vol. 8333
Keywords
human action recognition; key-sequences; sparse coding; inter-temporal acts descriptor;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper addresses human action recognition in video by proposing a method based on three main processing steps. First, we tackle problems related to intra-class variations and differences in video lengths. We achieve this by reducing an input video to a set of key-sequences that represent atomic meaningful acts of each action class. Second, we use sparse coding techniques to learn a representation for each key-sequence. We then join these representations while still preserving information about temporal relationships. We believe this is a key step of our approach because it provides not only a suitable shared representation to characterize atomic acts, but also encodes global temporal consistency among these acts. Accordingly, we call this representation the inter-temporal acts descriptor. Third, we use this representation and sparse coding techniques to classify new videos. Finally, we show that our approach outperforms several state-of-the-art methods when tested on common benchmarks.
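To make the pipeline in the abstract concrete, the following is a minimal sketch, assuming key-sequence features and dictionaries have already been obtained elsewhere (feature extraction, key-sequence selection, and dictionary learning are not shown). All names, shapes, and the use of orthogonal matching pursuit here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the paper's code) of the inter-temporal acts
# descriptor (ITAD) idea: each key-sequence is sparse-coded against a
# shared dictionary, the codes are concatenated in temporal order, and a
# new video is labeled by the class whose dictionary reconstructs its
# descriptor with the smallest residual (SRC-style rule).
import numpy as np
from sklearn.linear_model import orthogonal_mp

def itad_descriptor(key_seq_features, dictionary, n_nonzero=5):
    """Sparse-code each key-sequence feature and concatenate the codes
    in temporal order.

    key_seq_features: (k, d) array, one d-dim feature per key-sequence,
                      ordered in time.
    dictionary:       (d, m) shared dictionary with m atoms.
    Returns a (k * m,) descriptor.
    """
    codes = [
        orthogonal_mp(dictionary, f, n_nonzero_coefs=n_nonzero)
        for f in key_seq_features
    ]
    return np.concatenate(codes)

def classify(descriptor, class_dictionaries, n_nonzero=5):
    """Assign the class whose dictionary yields the smallest
    sparse-reconstruction residual of the descriptor."""
    residuals = {}
    for label, D in class_dictionaries.items():  # D: (k * m, atoms)
        code = orthogonal_mp(D, descriptor, n_nonzero_coefs=n_nonzero)
        residuals[label] = np.linalg.norm(descriptor - D @ code)
    return min(residuals, key=residuals.get)

# Usage with random stand-in data (shapes are hypothetical):
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 64))        # 4 key-sequences, 64-dim features
D_shared = rng.normal(size=(64, 128))   # shared dictionary, 128 atoms
desc = itad_descriptor(feats, D_shared) # (4 * 128,) ITAD descriptor
```

Concatenating the per-key-sequence codes in temporal order is what lets the descriptor encode global temporal consistency among the atomic acts; the residual-based rule then performs the sparse-coding classification the abstract refers to.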
Pages: 419-430
Page count: 12
Related Papers
50 records in total
  • [41] Exploring hybrid spatio-temporal convolutional networks for human action recognition
    Wang, Hao
    Yang, Yanhua
    Yang, Erkun
    Deng, Cheng
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (13) : 15065 - 15081
  • [42] A Human Action Recognition Method Based on Tchebichef Moment Invariants and Temporal Templates
    Lu, Yanan
    Li, Yakang
    Shen, Yang
    Ding, Fang
    Wang, Xiaofeng
    Hu, Jicheng
    Ding, Songtao
    2012 4TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC), VOL 2, 2012, : 76 - 79
  • [43] Human Action Recognition Based on Self-learned Key Frames and Features Extraction
    Fu, Qi
    Liu, Lina
    Ma, Shiwei
    2017 CHINESE AUTOMATION CONGRESS (CAC), 2017, : 3498 - 3502
  • [44] Key frame and skeleton extraction for deep learning-based human action recognition
    Hai-Hong Phan
    Trung Tin Nguyen
    Ngo Huu Phuc
    Nguyen Huu Nhan
    Do Minh Hieu
    Cao Truong Tran
    Bao Ngoc Vi
    2021 RIVF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION TECHNOLOGIES (RIVF 2021), 2021, : 180 - 185
  • [45] Robust Human Action Recognition Using Global Spatial-Temporal Attention for Human Skeleton Data
    Han, Yun
    Chung, Sheng-Luen
    Ambikapathi, ArulMurugan
    Chan, Jui-Shan
    Lin, Wei-You
    Su, Shun-Feng
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,
  • [46] Adaptive recognition method of human skeleton action with spatial-temporal tensor fusion
    Jian Z.
    Nan J.
    Liu X.
    Dai W.
    Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2023, 44 (06): : 74 - 85
  • [47] ATOMIC HUMAN ACTION SEGMENTATION AND RECOGNITION USING A SPATIO-TEMPORAL PROBABILISTIC FRAMEWORK
    Chen, Duan-Yu
    Liao, Hong-Yuan Mark
    Shih, Sheng-Wen
    INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING, 2007, 1 (02) : 205 - 220
  • [48] Human action recognition based on graph-embedded spatio-temporal subspace
    Tseng, Chien-Chung
    Chen, Ju-Chin
    Fang, Ching-Hsien
    Lien, Jenn-Jier James
    PATTERN RECOGNITION, 2012, 45 (10) : 3611 - 3624
  • [49] An Unsupervised Feature learning and clustering method for key frame extraction on human action recognition
    Pei, Xiaomin
    Fan, Huijie
    Tang, Yandong
    2017 IEEE 7TH ANNUAL INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (CYBER), 2017, : 759 - 762
  • [50] Depthwise Spatio-Temporal STFT Convolutional Neural Networks for Human Action Recognition
    Kumawat, Sudhakar
    Verma, Manisha
    Nakashima, Yuta
    Raman, Shanmuganathan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (09) : 4839 - 4851