ReadingAct RGB-D action dataset and human action recognition from local features

Cited by: 14
Authors
Chen, Lulu [1 ]
Wei, Hong [1 ]
Ferryman, James [1 ]
Affiliations
[1] Univ Reading, Sch Syst Engn, Computat Vis Grp, Reading RG6 6AY, Berks, England
Keywords
Human action recognition; Depth sensor; Spatio-temporal local features; Dynamic time warping; ReadingAct action dataset; DENSE;
DOI
10.1016/j.patrec.2013.09.004
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
For general home monitoring, a system should automatically interpret people's actions. It should be non-intrusive, and able to cope with cluttered backgrounds and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been applied in the literature to human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions, for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and has advantages when dealing with long actions. (C) 2013 Elsevier B.V. All rights reserved.
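The dynamic time alignment step described in the abstract can be illustrated with classic dynamic time warping (DTW) over a sequence of per-bin BoW histograms. The sketch below is illustrative only, assuming a Euclidean local cost between histograms; it is not the authors' exact temporal-binning implementation, and the function name `dtw_distance` is a hypothetical label.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two sequences of feature
    vectors (e.g., per-temporal-bin Bag-of-Words histograms).

    Illustrative sketch: local cost is the Euclidean distance between
    histograms; returns the accumulated cost of the optimal alignment.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j],      # step in seq_a only
                                 cost[i, j - 1],      # step in seq_b only
                                 cost[i - 1, j - 1])  # step in both
    return cost[n, m]

# Identical sequences align with zero cost; sequences of different
# lengths are warped onto each other before costs are accumulated.
a = [np.array([0.0]), np.array([2.0])]
b = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
print(dtw_distance(a, a))  # 0.0
print(dtw_distance(a, b))  # 1.0
```

In an action-recognition setting of this kind, each video would be split into temporal bins, a BoW histogram computed per bin, and the DTW cost between two videos' histogram sequences used as the dissimilarity for nearest-neighbour classification.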
Pages: 159-169
Page count: 11
Related papers
50 records in total
  • [1] Human Action Recognition Using RGB-D Image Features
    Tang C.
    Wang W.
    Zhang C.
    Peng H.
    Li W.
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2019, 32 (10): : 901 - 908
  • [2] Arbitrary-View Human Action Recognition: A Varying-View RGB-D Action Dataset
    Ji, Yanli
    Yang, Yang
    Shen, Fumin
    Shen, Heng Tao
    Zheng, Wei-Shi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (01) : 289 - 300
  • [3] Complex Network-based features extraction in RGB-D human action recognition
    Barkoky, Alaa
    Charkari, Nasrollah Moghaddam
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2022, 82
  • [4] Learning Human Pose Models from Synthesized Data for Robust RGB-D Action Recognition
    Liu, Jian
    Rahmani, Hossein
    Akhtar, Naveed
    Mian, Ajmal
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2019, 127 (10) : 1545 - 1564
  • [5] Evaluating fusion of RGB-D and inertial sensors for multimodal human action recognition
    Imran, Javed
    Raman, Balasubramanian
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2020, 11 (01) : 189 - 208
  • [6] Coupled hidden conditional random fields for RGB-D human action recognition
    Liu, An-An
    Nie, Wei-Zhi
    Su, Yu-Ting
    Ma, Li
    Hao, Tong
    Yang, Zhao-Xuan
    SIGNAL PROCESSING, 2015, 112 : 74 - 82
  • [7] Evolutionary joint selection to improve human action recognition with RGB-D devices
    Andre Chaaraoui, Alexandros
    Ramon Padilla-Lopez, Jose
    Climent-Perez, Pau
    Florez-Revuelta, Francisco
    EXPERT SYSTEMS WITH APPLICATIONS, 2014, 41 (03) : 786 - 794
  • [8] Learned Spatio-Temporal Texture Descriptors for RGB-D Human Action Recognition
    Zhai, Zhengyuan
    Fan, Chunxiao
    Ming, Yue
    COMPUTING AND INFORMATICS, 2018, 37 (06) : 1339 - 1362