Real-time multi-scale tracking based on compressive sensing

Cited by: 0
Authors
Yunxia Wu
Ni Jia
Jiping Sun
Institution
[1] China University of Mining and Technology, School of Mechanical Electronic and Information Engineering
Source
The Visual Computer | 2015 / Vol. 31
Keywords
Tracking-by-detection; Multi-scale; Compressive tracking; Bootstrap filter; Real time
DOI
Not available
Abstract
Tracking-by-detection methods have been widely studied and have achieved promising results. These methods use discriminative appearance models to train and update online classifiers, and a sliding window to collect candidate samples to be classified; the location of the sample with the maximum classifier response is then selected as the new target location. Compressive tracking (CT) was recently proposed with an appearance model based on features extracted in the compressed domain. However, CT uses a fixed-size tracking box to detect samples, which is unsuitable for practical applications. CT also detects samples only within a fixed radius around the region selected in the previous frame, so the classifier may become inaccurate if the selected region drifts, and the fixed radius is ill-suited to targets that undergo abrupt changes in acceleration. Furthermore, CT updates the classifier parameters with a constant learning rate; if the target is fully occluded for an extended period, the classifier instead learns the features of the occluding object and the target is ultimately lost. In this paper, we present a multi-scale compressive tracker that integrates an improved appearance model, based on normalized rectangle features extracted in an adaptive compressive domain, into a bootstrap filter. This feature extraction is efficient, and its computational complexity does not increase as the tracking region grows. The classifier response is used to generate particle importance weights, and a resampling procedure retains particles according to their weights. A second-order transition model takes the target velocity into account to estimate the current position and scale, so sampling is not limited to a fixed range. Feedback strategies are adopted to adjust the learning rate under occlusion. Experimental results on various challenging benchmark sequences demonstrate the superior performance of our tracker compared with several state-of-the-art tracking algorithms.
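For readers unfamiliar with the bootstrap-filter loop outlined in the abstract, the following Python sketch shows how a second-order (velocity-aware) transition model, classifier-driven importance weights, and resampling fit together. It is an illustrative approximation only: the particle state layout, the noise parameters, and the classifier_response callable are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

# Particle state layout (assumed for this sketch): [x, y, vx, vy, scale].

def propagate(particles, dt=1.0, pos_noise=4.0, vel_noise=1.0, scale_noise=0.02):
    """Second-order (constant-velocity) transition: each particle's position is
    advanced by its velocity and then perturbed, so sampling is not tied to a
    fixed radius around the previous location."""
    n = len(particles)
    particles[:, 0:2] += particles[:, 2:4] * dt                  # x += vx*dt, y += vy*dt
    particles[:, 0:2] += np.random.randn(n, 2) * pos_noise       # position diffusion
    particles[:, 2:4] += np.random.randn(n, 2) * vel_noise       # velocity diffusion
    particles[:, 4] *= np.exp(np.random.randn(n) * scale_noise)  # multiplicative scale walk
    return particles

def reweight(particles, classifier_response):
    """Importance weights from the classifier response evaluated at each
    particle's candidate window (classifier_response is a placeholder)."""
    scores = np.array([classifier_response(p) for p in particles])
    w = np.exp(scores - scores.max())        # shift for numerical stability
    return w / w.sum()

def resample(particles, weights):
    """Systematic resampling: particles survive in proportion to their weights."""
    n = len(particles)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(idx, n - 1)].copy()

# One illustrative tracking step:
#   particles = propagate(particles)
#   weights   = reweight(particles, classifier_response)
#   estimate  = weights @ particles          # weighted mean state
#   particles = resample(particles, weights)
```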
Pages: 471-484
Page count: 13