Improvement on Tracking Based on Motion Model and Model Updater

Cited by: 1
Authors
Liu, Tong [1 ]
Xu, Chao [1 ]
Meng, Zhaopeng [1 ]
Xue, Wanli [2 ]
Li, Chao [3 ]
Affiliations
[1] Tianjin Univ, Sch Comp Software, Tianjin 300354, Peoples R China
[2] Tianjin Univ, Sch Comp Sci & Technol, Tianjin, Peoples R China
[3] Tianjin Normal Univ, Coll Comp & Informat Engn, Tianjin, Peoples R China
Source
COMPUTER VISION, PT III | 2017, Vol. 773
Funding
National Natural Science Foundation of China
Keywords
Visual tracking; Motion model; Model updater; Saliency detection; Image similarity; Image segmentation; Object tracking
DOI
10.1007/978-981-10-7305-2_54
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Motion model and model updater are two important components of online visual tracking. On the one hand, an effective motion model needs to strike the right balance between target processing, which accounts for the target's appearance, and scene analysis, which describes stable background information. Most conventional trackers focus on only one of these two aspects and hence cannot achieve the right balance. On the other hand, a good model update strategy needs to consider both tracking speed and model drift. Most tracking models are updated on every frame or at fixed intervals, which is rarely optimal. In this paper, we approach the motion model problem by collaboratively using salient region detection and image segmentation. The two methods serve different purposes: in the absence of prior knowledge, the former considers image attributes such as color, gradient, edges, and boundaries to form a robust object estimate; the latter aggregates individual pixels into meaningful atomic regions by exploiting prior knowledge of the target and background in the video sequence. Taking advantage of their complementary roles, we construct a more reasonable confidence map. For the model update problem, we update the model dynamically by analyzing the scene with image similarity, which both reduces the update frequency of the model and suppresses model drift. Finally, we integrate the two components into the pipeline of the traditional tracker CT, and experiments demonstrate the effectiveness and robustness of the proposed components.
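The abstract's image-similarity-gated update can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple grayscale-histogram similarity measure and hypothetical `low`/`high` thresholds. The idea is that the appearance model is updated only when the scene has changed moderately; near-identical frames are skipped (reducing update frequency), and drastically different frames (e.g., occlusions) are also skipped (suppressing model drift).

```python
import numpy as np

def hist_similarity(frame_a, frame_b, bins=16):
    """Bhattacharyya coefficient between grayscale intensity histograms.

    Returns a value in [0, 1]; 1.0 means identical distributions.
    """
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / (ha.sum() + 1e-12)  # normalize to probability distributions
    hb = hb / (hb.sum() + 1e-12)
    return float(np.sum(np.sqrt(ha * hb)))

def should_update(prev_frame, cur_frame, low=0.3, high=0.95):
    """Gate the model update on scene similarity (thresholds are hypothetical).

    - sim >= high: scene nearly unchanged, skip update (reduce frequency)
    - sim <= low:  scene changed drastically, skip update (suppress drift)
    - otherwise:   moderate change, update the appearance model
    """
    sim = hist_similarity(prev_frame, cur_frame)
    return low < sim < high
```

The same gating logic would apply regardless of the similarity measure; a color-histogram or feature-based similarity could replace `hist_similarity` without changing the decision rule.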
Pages: 639-650 (12 pages)