Object tracking in the presence of shaking motions

Cited by: 11
Authors
Dai, Manna [1 ,2 ,3 ]
Cheng, Shuying [1 ,4 ]
He, Xiangjian [2 ]
Wang, Dadong [3 ]
Affiliations
[1] Fuzhou University, Institute of Micro-Nano Devices and Solar Cells, College of Physics and Information Engineering, Fuzhou, Fujian, People's Republic of China
[2] University of Technology Sydney, Sydney, NSW, Australia
[3] CSIRO, Sydney, NSW, Australia
[4] Jiangsu Collaborative Innovation Center of Photovoltaic Science and Engineering, Changzhou, People's Republic of China
Keywords
Shaking targets; Uniform sampling; Kernel; Temporal and spatial context; Robust visual tracking; Kalman filter
DOI
10.1007/s00521-018-3387-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visual tracking can be interpreted as a process of searching for targets and optimizing that search. In this paper, we present a novel tracking framework for shaking targets. We formulate the underlying geometric relationship between the search scope and the target displacement. Uniform sampling within the search scope is implemented with sliding windows. To alleviate redundant matching, we propose a double-template structure comprising the initial and the previous tracking results. The element-wise similarities between a template and its candidates are computed with kernel functions, which provide better outlier rejection. The STC algorithm is then used to refine the tracking results by maximizing a confidence map that incorporates temporal and spatial context cues about the tracked targets. For better adaptation to appearance variations, we employ linear interpolation to update the context prior probability of the STC method. Both qualitative and quantitative evaluations are performed on all sequences containing shaking motions selected from the challenging OTB-50 benchmark. The proposed approach outperforms 12 state-of-the-art tracking methods on the selected sequences while running in MATLAB without code optimization. We have also performed further experiments on the full OTB-50 and VOT 2015 datasets. Although most of the sequences in these two datasets do not contain the motion blur that this paper focuses on, the results of our method remain favorable compared with the state-of-the-art approaches.
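To make the pipeline sketched in the abstract concrete, the following Python fragment is a minimal illustration, not the authors' implementation, of uniform sliding-window sampling over a search scope, double-template matching with a Gaussian kernel, and a linear-interpolation model update. The window size, search radius, kernel bandwidth, and learning rate are assumed values chosen only for the example.

import numpy as np

def sample_candidates(frame, center, win, radius, step=2):
    # Uniformly sample sliding-window candidates inside a square search scope
    # centered at (y, x) = center; frame is a 2-D grayscale array.
    h, w = frame.shape
    half = win // 2
    cands, locs = [], []
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            y, x = center[0] + dy, center[1] + dx
            if half <= y <= h - half and half <= x <= w - half:
                cands.append(frame[y - half:y + half, x - half:x + half].ravel())
                locs.append((y, x))
    return np.array(cands), locs

def gaussian_kernel_similarity(candidates, template, sigma=0.2):
    # Element-wise Gaussian-kernel similarity; large pixel differences
    # (outliers) are strongly down-weighted by the kernel.
    diff = candidates - template.ravel()[None, :]
    return np.exp(-(diff ** 2) / (2.0 * sigma ** 2)).mean(axis=1)

def track_step(frame, center, init_tpl, prev_tpl, win=32, radius=15):
    # Double-template matching: score each candidate against both the
    # initial template and the previous tracking result, keep the best.
    cands, locs = sample_candidates(frame, center, win, radius)
    score = (gaussian_kernel_similarity(cands, init_tpl)
             + gaussian_kernel_similarity(cands, prev_tpl))
    best = int(np.argmax(score))
    return locs[best], cands[best].reshape(win, win)

def update_prior(prior, observation, rho=0.075):
    # Linear-interpolation update of a context prior; rho is an assumed
    # learning rate, not the value used in the paper.
    return (1.0 - rho) * prior + rho * observation

# Per frame (hypothetical usage): center, prev_tpl = track_step(gray, center, init_tpl, prev_tpl)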
Pages: 5917-5934
Number of pages: 18