Leveraging Structural Context Models and Ranking Score Fusion for Human Interaction Prediction

Cited by: 28
Authors
Ke, Qiuhong [1 ]
Bennamoun, Mohammed [1 ]
An, Senjian [1 ]
Sohel, Ferdous [2 ]
Boussaid, Farid [3 ]
Affiliations
[1] Univ Western Australia, Sch Comp Sci & Software Engn, Crawley, WA 6009, Australia
[2] Murdoch Univ, Sch Engn & Informat Technol, Murdoch, WA 6150, Australia
[3] Univ Western Australia, Sch Elect Elect & Comp Engn, Crawley, WA 6009, Australia
Funding
Australian Research Council
Keywords
Interaction prediction; interaction structure; LSTM; ranking score fusion; ACTION RECOGNITION;
DOI
10.1109/TMM.2017.2778559
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Predicting an interaction before it is fully executed is important in applications such as human-robot interaction and video surveillance. In a two-person interaction scenario, there are often contextual dependency structures between the global interaction context of the two people and the local contexts of the different body parts of each person. In this paper, we propose to learn the structure of these interaction contexts and combine it with the spatial and temporal information of a video sequence to better predict the interaction class. The structural models, comprising a spatial and a temporal model, are learned with long short-term memory (LSTM) networks to capture the dependencies of the global and local contexts of each RGB frame and each optical-flow image, respectively. The LSTM networks are also capable of detecting key information in the global and local interaction contexts. Moreover, to effectively combine the structural models with the spatial and temporal models for interaction prediction, a ranking score fusion method is introduced that automatically computes the optimal weight of each model for score fusion. Experimental results on the BIT-Interaction and UT-Interaction datasets clearly demonstrate the benefits of the proposed method.
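The abstract describes fusing the class scores of several models (structural, spatial, temporal) with learned per-model weights. As a minimal illustrative sketch only, the snippet below combines per-model score vectors with non-negative weights summing to one and picks the weights by a coarse grid search over the simplex on validation data. All names and data are hypothetical; the paper itself derives the weights through a ranking formulation, not a grid search.

```python
# Illustrative score fusion: weighted sum of per-model class scores,
# with weights chosen on validation data. Not the authors' method.
from itertools import product

def fuse(score_lists, weights):
    """Weighted sum of per-model score vectors for one sample."""
    n_classes = len(score_lists[0])
    return [sum(w * s[c] for w, s in zip(weights, score_lists))
            for c in range(n_classes)]

def accuracy(samples, labels, weights):
    """Fraction of samples whose fused argmax matches the label."""
    correct = 0
    for scores, y in zip(samples, labels):
        fused = fuse(scores, weights)
        if max(range(len(fused)), key=fused.__getitem__) == y:
            correct += 1
    return correct / len(labels)

def search_weights(samples, labels, n_models, step=0.1):
    """Brute-force search on a coarse simplex grid (weights >= 0, sum = 1)."""
    steps = int(round(1 / step))
    best_w, best_acc = None, -1.0
    for combo in product(range(steps + 1), repeat=n_models):
        if sum(combo) != steps:
            continue
        w = [c * step for c in combo]
        acc = accuracy(samples, labels, w)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy validation set: two models, three classes, three samples.
samples = [
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],  # per-model scores for sample 0
    [[0.1, 0.7, 0.2], [0.3, 0.4, 0.3]],
    [[0.2, 0.2, 0.6], [0.1, 0.1, 0.8]],
]
labels = [0, 1, 2]
w, acc = search_weights(samples, labels, n_models=2)
print(w, acc)
```

The same weighted-sum structure extends to any number of models; only the weight-selection procedure differs between this sketch and the paper's ranking-based approach.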
Pages: 1712-1723
Page count: 12