ReadingAct RGB-D action dataset and human action recognition from local features

Cited by: 14
Authors
Chen, Lulu [1 ]
Wei, Hong [1 ]
Ferryman, James [1 ]
Affiliation
[1] Univ Reading, Sch Syst Engn, Computat Vis Grp, Reading RG6 6AY, Berks, England
Keywords
Human action recognition; Depth sensor; Spatio-temporal local features; Dynamic time warping; ReadingAct action dataset; DENSE;
DOI
10.1016/j.patrec.2013.09.004
CLC classification number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
For general home monitoring, a system should automatically interpret people's actions. The system should be non-intrusive and able to deal with cluttered backgrounds and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been applied in the literature to human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results show that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and that it has advantages when dealing with long actions. (C) 2013 Elsevier B.V. All rights reserved.
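The abstract outlines a three-step pipeline: quantise spatio-temporal local features into a visual vocabulary, build a Bag-of-Words histogram per temporal bin so temporal order is retained, and compare two videos by dynamic time alignment over those histogram sequences. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation; the descriptor input, the choice of scipy's kmeans2 as clustering backend, the vocabulary size, the number of temporal bins, and the Euclidean bin distance are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code) of the idea described in the abstract:
# 1) cluster local spatio-temporal descriptors into a visual vocabulary,
# 2) build a Bag-of-Words histogram per temporal bin to keep temporal order,
# 3) compare two videos with dynamic time warping over the bin histograms.
import numpy as np
from scipy.cluster.vq import kmeans2  # assumed clustering backend


def build_vocabulary(descriptors: np.ndarray, k: int = 100, seed: int = 0) -> np.ndarray:
    """Cluster an (N, D) matrix of local descriptors into k visual words."""
    centroids, _ = kmeans2(descriptors.astype(float), k, minit="++", seed=seed)
    return centroids


def binned_bow(descriptors: np.ndarray, frame_ids: np.ndarray,
               vocabulary: np.ndarray, n_bins: int = 10) -> np.ndarray:
    """Per-temporal-bin BoW histograms, shape (n_bins, k), L1-normalised."""
    # assign each descriptor to its nearest visual word
    dists = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)
    # map each descriptor's frame index to one of n_bins equal temporal bins
    span = max(frame_ids.max() - frame_ids.min(), 1)
    bins = (n_bins * (frame_ids - frame_ids.min()) // span).clip(0, n_bins - 1)
    hist = np.zeros((n_bins, len(vocabulary)))
    for b, w in zip(bins.astype(int), words):
        hist[b, w] += 1
    sums = hist.sum(axis=1, keepdims=True)
    return np.divide(hist, sums, out=np.zeros_like(hist), where=sums > 0)


def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Classic dynamic time warping between two sequences of bin histograms."""
    na, nb = len(seq_a), len(seq_b)
    cost = np.full((na + 1, nb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # Euclidean bin distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[na, nb])
```

A nearest-neighbour (or kernel-based) classifier over such DTW distances would then assign action labels; these classifier choices, like the parameters above, are illustrative rather than taken from the paper.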
Pages: 159 - 169
Number of pages: 11
Related papers
50 records in total
  • [31] Color-Aware Local Spatiotemporal Features for Action Recognition
    Souza, Fillipe
    Valle, Eduardo
    Chavez, Guillermo
    Araujo, Arnaldo de A.
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, 2011, 7042 : 248 - +
  • [32] Aeriform in-action: A novel dataset for human action recognition in aerial videos
    Kapoor, Surbhi
    Sharma, Akashdeep
    Verma, Amandeep
    Singh, Sarbjeet
    PATTERN RECOGNITION, 2023, 140
  • [33] Human action recognition employing negative space features
    Rahman, Shah Atiqur
    Leung, M. K. H.
    Cho, Siu-Yeung
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2013, 24 (03) : 217 - 231
  • [34] Human action recognition using bag of global and local Zernike moment features
    Aly, Saleh
    Sayed, Asmaa
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (17) : 24923 - 24953
  • [35] Human action recognition using bag of global and local Zernike moment features
    Saleh Aly
    Asmaa Sayed
    Multimedia Tools and Applications, 2019, 78 : 24923 - 24953
  • [36] Human action recognition based on hybrid features
    Zhong, Ju
    Liu, Huawen
    Lin, Chunli
    MECHATRONICS, ROBOTICS AND AUTOMATION, PTS 1-3, 2013, 373-375 : 1188 - +
  • [37] Human Action Recognition using Skeleton features
    Patil, Akash Anil
    Swaminathan, A.
    Rajan, Ashoka R.
    Narayanan, Neela V.
    Gayathri, R.
    2022 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ADJUNCT (ISMAR-ADJUNCT 2022), 2022, : 289 - 296
  • [38] Human Action Recognition Based on Skeleton Features
    Gao, Yi
    Wu, Haitao
    Wu, Xinmeng
    Li, Zilin
    Zhao, Xiaofan
    COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2023, 20 (01) : 537 - 550
  • [39] RGB-D based human action recognition using evolutionary self-adaptive extreme learning machine with knowledge-based control parameters
    Pareek, Preksha
    Thakkar, Ankit
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021, 14 (2) : 939 - 957
  • [40] RGB-D based human action recognition using evolutionary self-adaptive extreme learning machine with knowledge-based control parameters
    Preksha Pareek
    Ankit Thakkar
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 : 939 - 957