3D Pose from Motion for Cross-view Action Recognition via Non-linear Circulant Temporal Encoding

Cited by: 77
Authors
Gupta, Ankur [1 ]
Martinez, Julieta [1 ]
Little, James J. [1 ]
Woodham, Robert J. [1 ]
Affiliations
[1] Univ British Columbia, Vancouver, BC V5Z 1M9, Canada
Source
2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2014
DOI
10.1109/CVPR.2014.333
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We describe a new approach to transfer knowledge across views for action recognition by using examples from a large collection of unlabelled mocap data. We achieve this by directly matching purely motion-based features from videos to mocap. Our approach recovers 3D pose sequences without performing any body-part tracking. We use these matches to generate multiple motion projections and thus add view invariance to our action recognition model. We also introduce a closed-form solution for approximate non-linear Circulant Temporal Encoding (nCTE), which allows us to efficiently perform the matches in the frequency domain. We test our approach on the challenging unsupervised modality of the IXMAS dataset, and use publicly available motion capture data for matching. Without any additional annotation effort, we are able to significantly outperform the current state of the art.
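To illustrate the core frequency-domain idea behind circulant temporal encoding, the sketch below scores every circular temporal shift of one feature sequence against another via the FFT, instead of sliding a window in the time domain. This is a minimal linear cross-correlation illustration, not the paper's exact nCTE formulation (which additionally approximates a non-linear kernel); the function name and shapes are assumptions for the example.

```python
import numpy as np

def circulant_match_scores(query, target):
    """Score all circular temporal shifts of `query` against `target`.

    query, target: (T, d) arrays of per-frame motion descriptors,
    padded to a common length T.  Returns a length-T array with the
    alignment score for each temporal offset, computed in O(T log T)
    per feature dimension via circular cross-correlation.
    """
    Q = np.fft.rfft(query, axis=0)   # FFT along the time axis
    X = np.fft.rfft(target, axis=0)
    # conj(Q) * X in the frequency domain is cross-correlation in time;
    # summing over feature dimensions accumulates per-frame similarities.
    return np.fft.irfft(np.sum(np.conj(Q) * X, axis=1), n=len(query))
```

The peak of the returned score vector gives the best temporal alignment between the two sequences, which is how frequency-domain matching avoids explicitly testing every shift.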
Pages: 2601-2608
Page count: 8