Effective template update mechanism in visual tracking with background clutter

Cited by: 55
Authors
Liu, Shuai [1 ,2 ]
Liu, Dongye [3 ]
Muhammad, Khan [4 ]
Ding, Weiping [5 ]
Affiliations
[1] Hunan Normal Univ, Hunan Prov Key Lab Intelligent Comp & Language In, Changsha 410000, Peoples R China
[2] Hunan Normal Univ, Coll Informat Sci & Engn, Changsha 410000, Peoples R China
[3] Inner Mongolia Univ, Coll Comp Sci, Hohhot 010010, Peoples R China
[4] Sejong Univ, Dept Software, Seoul 143747, South Korea
[5] Nantong Univ, Sch Informat Sci & Technol, Nantong 226019, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Artificial intelligence; Background clutter; Video data; Template update; Visual tracking; OBJECT TRACKING; SCALE;
DOI
10.1016/j.neucom.2019.12.143
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Today, artificial intelligence is everywhere in people's daily lives. Visual tracking, which identifies and continuously tracks specific targets, is an important research domain within artificial intelligence. However, current visual tracking methods are not accurate enough when the target is surrounded by background clutter, which easily leads to tracking failures. Therefore, in order to address tracking failure in cluttered backgrounds, this paper proposes a template update mechanism that improves the accuracy of visual tracking. First, the original template is saved when background clutter is detected. During the clutter, both the original template and the current template are evaluated at the location estimated by optical flow, and the better one is chosen. Next, the original template is reused after the background clutter ends. Finally, the proposed mechanism is embedded in both the KCF and BACF algorithms to verify its effectiveness. Experiments on the OTB2015 dataset show that the proposed mechanism improves the accuracy and success rate of both baseline algorithms, and the resulting tracker also performs competitively against state-of-the-art algorithms. In addition, the method exhibits strong tracking robustness and adapts well to sequential learning on video data. (C) 2020 Elsevier B.V. All rights reserved.
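The per-frame logic described in the abstract can be summarized in a short sketch. The Python code below is only a minimal illustration under stated assumptions, not the authors' implementation: the class name ClutterAwareTemplateUpdater, the placeholder response() score, the learning rate, and the way the optical-flow-predicted patch is passed in are all hypothetical; a real KCF/BACF tracker would compute responses with correlation filters in the Fourier domain.

# Hypothetical sketch of the clutter-aware template update (not the paper's code).
import numpy as np

def response(template, patch):
    # Placeholder similarity score between a template and an image patch
    # (same shape assumed); KCF/BACF would use a correlation-filter response.
    t = (template - template.mean()) / (template.std() + 1e-8)
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    return float((t * p).mean())

class ClutterAwareTemplateUpdater:
    def __init__(self, initial_template, lr=0.02):
        self.template = initial_template.astype(np.float32)  # current template
        self.original = None       # snapshot taken when clutter starts
        self.in_clutter = False
        self.lr = lr               # linear-interpolation learning rate (assumed)

    def step(self, predicted_patch, clutter_detected):
        if clutter_detected and not self.in_clutter:
            # Clutter just started: save the pre-clutter (original) template.
            self.original = self.template.copy()
            self.in_clutter = True
        if self.in_clutter and clutter_detected:
            # Evaluate both templates at the optical-flow-predicted patch
            # and keep whichever matches the target better.
            if response(self.original, predicted_patch) >= response(self.template, predicted_patch):
                self.template = self.original.copy()
        if self.in_clutter and not clutter_detected:
            # Clutter ended: reuse the saved original template.
            self.template = self.original.copy()
            self.in_clutter = False
            self.original = None
        # Standard running-average template update used by KCF/BACF-style trackers.
        self.template = (1 - self.lr) * self.template + self.lr * predicted_patch.astype(np.float32)
        return self.template

In this sketch, predicted_patch would be the image patch extracted at the position predicted by optical flow while clutter is detected, and at the tracker's own estimate otherwise.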
Pages: 615-625
Page count: 11