Inferring Ongoing Human Activities Based on Recurrent Self-Organizing Map Trajectory

Cited by: 1
Authors
Sun, Qianru [1 ]
Liu, Hong [2 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Engn Lab Intelligent Percept Internet Things ELIP, Beijing, Peoples R China
[2] Peking Univ, Key Laboratory of Machine Percept, Beijing, Peoples R China
Source
PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2013 | 2013
Funding
National Natural Science Foundation of China; National High Technology Research and Development Program of China (863 Program);
Keywords
RECOGNITION;
DOI
10.5244/C.27.11
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automatically inferring ongoing activities enables the early recognition of unfinished activities, which is valuable for applications such as online human-machine interaction and security monitoring. State-of-the-art methods use spatio-temporal interest point (STIP) based features as the low-level video description to handle complex scenes. The problem is that the typical bag-of-visual-words (BoVW) representation captures the statistical distribution of features but ignores the inherent contexts in activity sequences, resulting in low discrimination when dealing directly with limited observations. To solve this problem, the Recurrent Self-Organizing Map (RSOM), which was designed to process sequential data, is adopted in this paper as the high-level representation of ongoing human activities. The key idea is that the currently observed features and their spatio-temporal contexts are encoded as a trajectory over the units of a pre-trained RSOM. Additionally, a combination of Dynamic Time Warping (DTW) distance and Edit distance, named DTW-E, is proposed to measure the structural dissimilarity between RSOM trajectories. Two real-world datasets with markedly different characteristics, complex scenes and inter-class ambiguities, serve as sources of data for evaluation. Experimental results based on kNN classifiers confirm that our approach can infer ongoing human activities with high accuracy.
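The DTW-E measure combines the two classic sequence distances named in the abstract. How they are weighted and applied to RSOM unit trajectories is specific to the paper; the sketch below only illustrates the two standard ingredients, DTW on numeric sequences and Levenshtein edit distance on symbol sequences:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j]: minimal cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # step in a only
                                  dp[i][j - 1],      # step in b only
                                  dp[i - 1][j - 1])  # step in both
    return dp[n][m]

def edit_distance(a, b):
    """Levenshtein edit distance between two symbol sequences."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[n][m]
```

In the paper's setting the inputs would be trajectories of RSOM unit indices rather than the generic sequences shown here.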
Pages: 11