ROBUST OBJECT TRACKING VIA MULTI-TASK DYNAMIC SPARSE MODEL

Cited: 0
Authors
Ji, Zhangjian [1 ]
Wang, Weiqiang [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing, Peoples R China
Source
2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | 2014
Keywords
Multi-task learning; Dynamic sparse model; Object tracking; Visual tracking
DOI
N/A
CLC Number
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Recently, sparse representation has been widely applied in generative tracking methods, which learn the representation of each particle independently and thus ignore both the correlations among particles and the temporal correlation of each particle's representation. In this paper, we formulate object tracking in a particle filter framework as a multi-task dynamic sparse learning problem, which we denote Multi-Task Dynamic Sparse Tracking (MTDST). By exploiting the popular sparsity-inducing l(1,2) mixed norm, we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Meanwhile, we also introduce an innovation sparse term into the tracking model. Compared with previous methods, our method mines the interdependencies between particles and the temporal correlation of particle representations, which improves tracking performance. In addition, because the loft least square is robust to outliers, we adopt it in place of the ordinary least square when computing the likelihood probability. In the update scheme, we eliminate the influence of occluded pixels when updating the templates. Comprehensive experiments on several challenging image sequences demonstrate that the proposed method consistently outperforms existing state-of-the-art methods.
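The core idea of jointly learning all particle representations under an l(1,2) mixed-norm penalty can be sketched with a standard proximal-gradient (ISTA) solver. This is a minimal illustrative sketch, not the authors' implementation: the dictionary `D`, particle-observation matrix `X`, the regularization weight `lam`, and the function names are all assumptions for illustration; the paper's full model additionally includes the dynamic/innovation sparse term and the robust likelihood, which are omitted here.

```python
import numpy as np

def prox_l12(C, t):
    # Row-wise group soft-thresholding: the proximal operator of
    # t * sum_i ||C[i, :]||_2. It shrinks or zeroes entire rows of C,
    # which enforces joint sparsity: all particles (columns) share the
    # same small set of active dictionary templates (rows).
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return C * scale

def joint_sparse_code(D, X, lam=0.01, n_iter=500):
    """Jointly sparse-code all columns of X over dictionary D by
    proximal gradient descent on
        0.5 * ||X - D C||_F^2 + lam * sum_i ||C[i, :]||_2 .
    Returns the coefficient matrix C (templates x particles)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    C = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ C - X)            # gradient of the data-fit term
        C = prox_l12(C - step * grad, step * lam)
    return C
```

In a tracking context, each column of `X` would be the vectorized image patch of one particle and each column of `D` a target or trivial template; solving for all columns at once, rather than one l1 problem per particle, is what makes the formulation "multi-task".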
Pages: 393-397
Page count: 5