Multi-view representation learning for multi-view action recognition

Times Cited: 26
Authors
Hao, Tong [1 ]
Wu, Dan [1 ]
Wang, Qian [1 ]
Sun, Jin-Sheng [1 ,2 ]
Affiliations
[1] Tianjin Normal Univ, Tianjin Key Lab Anim & Plant Resistance, Coll Life Sci, Tianjin 300387, Peoples R China
[2] Tianjin Aquat Anim Infect Dis Control & Prevent C, Tianjin 300221, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-view learning; Multi-task learning; Sparse coding; Action recognition; MODEL; DICTIONARY;
DOI
10.1016/j.jvcir.2017.01.019
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Although multiple methods have been proposed for human action recognition, existing multi-view approaches cannot effectively discover meaningful relationships among multiple action categories across different views. To address this problem, this paper proposes a multi-view learning approach for multi-view action recognition. First, the proposed method leverages popular visual representation methods, bag-of-visual-words (BoVW) and Fisher vector (FV), to represent individual videos in each view. Second, a sparse coding algorithm is utilized to transform the low-level features of the various views into a discriminative, high-level semantic space. Third, we employ a multi-task learning (MTL) approach for joint action modeling and discovery of latent relationships among different action categories. Extensive experimental results on the M²I and IXMAS datasets demonstrate the effectiveness of the proposed approach. Moreover, the experiments further show that the discovered latent relationships benefit multi-view model learning and improve action recognition performance. (C) 2017 Published by Elsevier Inc.
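The abstract outlines a three-stage pipeline: per-view BoVW/FV features, sparse coding into a semantic space, and multi-task learning over action categories. The sketch below is a minimal, hypothetical illustration of that pipeline for a single view, not the paper's implementation: synthetic histograms stand in for real BoVW video features, scikit-learn's DictionaryLearning approximates the sparse coding stage, and MultiTaskLasso with one-vs-rest regression targets stands in for the paper's multi-task action model.

```python
# Minimal sketch of the three-stage pipeline described in the abstract.
# Assumptions (not from the paper): synthetic data, sklearn components
# as stand-ins for the authors' sparse coding and MTL formulations.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n_videos, n_words, n_atoms, n_actions = 200, 200, 64, 12

# Stage 1: BoVW histograms for one camera view (placeholder data).
X = rng.random((n_videos, n_words))
X /= X.sum(axis=1, keepdims=True)          # L1-normalize histograms
y = rng.integers(0, n_actions, n_videos)   # action labels

# Stage 2: sparse coding maps low-level histograms to sparse codes
# over a learned dictionary (here via orthogonal matching pursuit).
dico = DictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                          transform_n_nonzero_coefs=8, random_state=0)
S = dico.fit_transform(X)                  # (n_videos, n_atoms) sparse codes

# Stage 3: multi-task learning over all action categories jointly;
# MultiTaskLasso's shared row-sparsity couples the per-action tasks,
# loosely mirroring the latent inter-category relationships the
# paper aims to discover.
Y = np.eye(n_actions)[y]                   # one-vs-rest regression targets
mtl = MultiTaskLasso(alpha=0.01, max_iter=2000).fit(S, Y)
pred = mtl.predict(S).argmax(axis=1)
print("train accuracy:", (pred == y).mean())
```

The joint L2,1 penalty in MultiTaskLasso forces the category-specific predictors to select a common subset of dictionary atoms, which is one simple way a multi-task regularizer can expose shared structure across action categories; the paper's actual model and dataset features differ.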
Pages: 453-460
Number of pages: 8