Learning Cross-domain Dictionary Pairs for Human Action Recognition

Cited: 0
Authors
Zhang, Bingbing [1 ]
Shi, Dongcheng [1 ]
Ni, Kang [1 ]
Liang, Chao [1 ]
Affiliations
[1] Changchun Univ Technol, Changchun 130012, Jilin, Peoples R China
Source
PROCEEDINGS OF THE 2015 2ND INTERNATIONAL WORKSHOP ON MATERIALS ENGINEERING AND COMPUTER SCIENCES (IWMECS 2015) | 2015 / Vol. 33
Keywords
Human action recognition; Local motion pattern; Dictionary learning;
DOI
Not available
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline code
0812;
Abstract
This paper presents a cross-domain dictionary learning method. By introducing an auxiliary domain as a source of extra knowledge, the intra-class diversity of the original training set (also known as the target domain) is effectively enhanced. First, local motion pattern features are used as low-level descriptors. Then, cross-domain reconstructive dictionary-pair learning brings the original target data and the auxiliary-domain data into the same feature space, yielding corresponding sparse codes for each human action category. Finally, classification and recognition are carried out on the resulting human action representations. Using the UCF YouTube dataset as the original training set and the HMDB51 dataset as the auxiliary set, the recognition rate of the proposed framework is significantly improved on the UCF YouTube dataset.
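The paper gives no code, but the core idea in the abstract — pool a small target-domain set with same-class auxiliary-domain samples, learn a per-class dictionary, and classify by reconstruction error of the sparse code — can be sketched in plain NumPy. Everything below is a hypothetical illustration: the greedy OMP-style coder, the MOD dictionary update (a simpler stand-in for K-SVD, reference [1]), and the synthetic features standing in for local motion patterns are assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_code(D, X, k):
    """Greedily code each column of X with at most k atoms of D (OMP-style)."""
    codes = np.zeros((D.shape[1], X.shape[1]))
    for i in range(X.shape[1]):
        residual, idx, coef = X[:, i].copy(), [], np.zeros(0)
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
            if j not in idx:
                idx.append(j)
            coef, *_ = np.linalg.lstsq(D[:, idx], X[:, i], rcond=None)
            residual = X[:, i] - D[:, idx] @ coef
        codes[idx, i] = coef
    return codes

def learn_dictionary(X, n_atoms, k=3, n_iter=10, seed=0):
    """Alternate sparse coding with a MOD dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        A = sparse_code(D, X, k)
        D = X @ A.T @ np.linalg.pinv(A @ A.T)   # MOD: minimize ||X - DA||_F
        D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize atoms
    return D

# Synthetic stand-in for low-level motion features: each "action class"
# lies near its own 4-dimensional subspace of R^20.
rng = np.random.default_rng(1)
B0, B1 = rng.standard_normal((20, 4)), rng.standard_normal((20, 4))

def samples(B, n, noise=0.1):
    return B @ rng.standard_normal((4, n)) + noise * rng.standard_normal((20, n))

# Pool the small "target" set (15 samples) with extra "auxiliary" samples
# (30) of the same class before learning each class dictionary.
D0 = learn_dictionary(np.hstack([samples(B0, 15), samples(B0, 30)]), n_atoms=8)
D1 = learn_dictionary(np.hstack([samples(B1, 15), samples(B1, 30)]), n_atoms=8)

def classify(x):
    """Assign x to the class whose dictionary reconstructs it best."""
    errors = [np.linalg.norm(x - (D @ sparse_code(D, x[:, None], 3)).ravel())
              for D in (D0, D1)]
    return int(np.argmin(errors))

print(classify(samples(B0, 1).ravel()))  # a class-0 sample should map to 0
```

The pooling step is where the auxiliary domain helps: with only 15 target samples per class, the learned atoms overfit; the auxiliary samples broaden intra-class diversity so the dictionary covers more of the class subspace.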
Pages: 423-428
Page count: 6
References
15 entries
[1] Aharon M., Elad M., Bruckstein A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322.
[2] Cao X. Neurocomputing, 2013, 100: 1.
[3] Dollar P. Visual Surveillance, 2005, 14: 65. DOI 10.1109/VSPETS.2005.1570899.
[4] Gorelick L., Blank M., Shechtman E., Irani M., Basri R. Actions as space-time shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(12): 2247-2253.
[5] Guha T., Ward R. K. Learning sparse representations for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(8): 1576-1588.
[6] Harris C. G. Proceedings of the Alvey Vision Conference, Manchester, UK, 1988, 15: 5244. DOI 10.5244/C.2.23.
[7] Jiang Z. L. Proceedings of CVPR, IEEE, 2011: 1697. DOI 10.1109/CVPR.2011.5995354.
[8] Laptev I. On space-time interest points. International Journal of Computer Vision, 2005, 64(2-3): 107-123.
[9] Liu J. Conference on Computer Vision and Pattern Recognition, 2009.
[10] Mairal. Advances in Neural Information Processing Systems, 2009.