Novel Human Action Recognition in RGB-D Videos Based on Powerful View Invariant Features Technique

Cited by: 2
Authors
Mambou, Sebastien [1 ]
Krejcar, Ondrej [1 ]
Kuca, Kamil [1 ]
Selamat, Ali [1 ,2 ]
Affiliations
[1] Univ Hradec Kralove, Fac Informat & Management, Ctr Basic & Appl Res, Rokitanskeho 62, Hradec Kralove 50003, Czech Republic
[2] Univ Teknol Malaysia, Fac Comp, Johor Baharu 81310, Johor, Malaysia
Source
MODERN APPROACHES FOR INTELLIGENT INFORMATION AND DATABASE SYSTEMS | 2018, Vol. 769
Keywords
Action recognition; View point; Sample-affinity matrix; Cross-view actions; NUMA; IXMAS; REPRESENTATIONS;
DOI
10.1007/978-3-319-76081-0_29
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Human action recognition is an important topic in current research. It is hindered by several factors, among them the variation in human shapes and postures, and the time and memory required to capture, store, label, and process the video data. In addition, recognizing a human action from different viewpoints is challenging because of the large variation within each view; one possible solution is to learn view-invariant features that are robust to view variation. In this paper, we address this problem by learning view-shared and view-specific features with innovative deep models built on a novel sample-affinity matrix (SAM), which accurately measures the similarities among video samples captured from different camera views. The SAM also makes it possible to precisely control the transfer of information between views and to learn more informative shared features for cross-view action classification. In addition, we propose a novel view-invariant features algorithm, which gives a clearer picture of the internal processing of our approach. Through a series of experiments on the NUMA and IXMAS multi-camera-view video datasets, we demonstrate that our method outperforms state-of-the-art methods.
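The abstract describes a sample-affinity matrix that measures similarities among video samples from different camera views. The paper does not give its exact definition here, but a common way to build such an affinity matrix is a Gaussian kernel over pairwise feature distances; the sketch below is an illustrative assumption of that construction, not the authors' actual SAM formulation (the function name, `gamma` parameter, and toy features are all hypothetical).

```python
import numpy as np

def sample_affinity_matrix(features, gamma=0.5):
    """Illustrative affinity matrix: Gaussian-kernel similarities
    between video feature vectors (one row per video sample).

    features: (n_samples, d) array
    Returns an (n_samples, n_samples) symmetric matrix with values in (0, 1].
    """
    # Squared Euclidean distances between all pairs of samples
    sq_norms = np.sum(features ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * features @ features.T
    sq_dists = np.maximum(sq_dists, 0.0)  # guard against tiny negative values
    return np.exp(-gamma * sq_dists)

# Toy example: 4 samples (e.g., 2 camera views x 2 actions), 3-dim features
X = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.9, 0.2]])
S = sample_affinity_matrix(X)
```

In this toy setup, samples 0 and 1 (the same action seen from two views) end up with a much higher affinity than samples 0 and 2 (different actions), which is the kind of cross-view similarity signal the abstract says the SAM provides.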
Pages: 343-353 (11 pages)