LEARNING A TEMPORALLY INVARIANT REPRESENTATION FOR VISUAL TRACKING
Cited by: 0
Authors:
Ma, Chao [1,2]; Yang, Xiaokang [1]; Zhang, Chongyang [1]; Yang, Ming-Hsuan [2]
Affiliations:
[1] Shanghai Jiao Tong Univ, Shanghai 200030, Peoples R China
[2] Univ Calif, Merced, CA USA
Source:
2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | 2015
Funding:
U.S. National Science Foundation
Keywords:
temporal invariance; feature learning; correlation filters; object tracking; OBJECT
DOI:
None available
CLC Classification:
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes:
0808; 0809
Abstract:
In this paper, we propose to learn temporally invariant features from a large number of image sequences to represent objects for visual tracking. These features are trained on a convolutional neural network with temporal invariance constraints and are robust to diverse motion transformations. We employ linear correlation filters to encode the appearance templates of targets and perform the tracking task by searching for the maximum response in each frame. The learned filters are updated online and adapt to significant appearance changes during tracking. Extensive experimental results on challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
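The tracking pipeline the abstract describes (encode a template with a linear correlation filter, locate the target at the maximum filter response, then update the filter online) can be sketched as below. This is a minimal single-channel, MOSSE-style illustration only: the paper's learned CNN features are replaced by raw intensities, and all names and hyperparameter values are assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_response(h, w, sigma=2.0):
    """Desired filter output: a Gaussian peak centred on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

class CorrelationFilterTracker:
    """Illustrative linear correlation filter tracker (hypothetical names)."""

    def __init__(self, template, lam=1e-2, lr=0.075):
        self.lam, self.lr = lam, lr              # regulariser, online learning rate
        g = gaussian_response(*template.shape)
        self.G = np.fft.fft2(g)                  # desired response, Fourier domain
        F = np.fft.fft2(template)
        self.A = self.G * np.conj(F)             # filter numerator
        self.B = F * np.conj(F) + self.lam       # filter denominator (regularised)

    def detect(self, patch):
        """Correlate filter with a search patch; return the peak's offset."""
        F = np.fft.fft2(patch)
        response = np.real(np.fft.ifft2((self.A / self.B) * F))
        dy, dx = np.unravel_index(np.argmax(response), response.shape)
        h, w = patch.shape
        return dy - h // 2, dx - w // 2          # shift of the peak from centre

    def update(self, patch):
        """Online running-average update, adapting to appearance change."""
        F = np.fft.fft2(patch)
        self.A = (1 - self.lr) * self.A + self.lr * self.G * np.conj(F)
        self.B = (1 - self.lr) * self.B + self.lr * (F * np.conj(F) + self.lam)
```

In use, each new frame is cropped around the previous target location, `detect` gives the displacement of the response peak, and `update` is called on the newly aligned patch; the closed-form Fourier-domain solution is what makes per-frame tracking efficient.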
Pages: 857-861
Page count: 5