Learning reliable-spatial and spatial-variation regularization correlation filters for visual tracking

Cited by: 8
Authors
Fu, Hengcheng [1 ]
Zhang, Yihong [1 ]
Zhou, Wuneng [1 ]
Wang, Xiaofeng [1 ]
Zhang, Huanlong [2 ]
Affiliations
[1] Donghua Univ, Coll Informat Sci & Technol, Shanghai 201620, Peoples R China
[2] Zhengzhou Univ Light Ind, Coll Elect & Informat Engn, 5 Dongfeng Rd, Zhengzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Correlation filters; Visual tracking; Spatial regularization; Object tracking; Robust
DOI
10.1016/j.imavis.2020.103869
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Single-object tracking is a significant and challenging computer vision problem. Recently, discriminative correlation filters (DCF) have shown excellent performance. However, they suffer from a theoretical defect: the boundary effect, caused by the periodic assumption of training samples, greatly limits tracking performance. Spatially regularized DCF (SRDCF) introduces a spatial regularization that penalizes the filter coefficients depending on their spatial location, which substantially improves tracking performance. However, this simple regularization strategy imposes unequal penalties on the filter coefficients within the target area, causing the filter to learn a distorted object appearance model. In this paper, a novel spatial regularization strategy is proposed that utilizes a reliability map to approximate the target area and keep the penalty coefficients of the relevant region consistent. In addition, we introduce a spatial-variation regularization component, the second-order difference of the filter, which smooths changes in the filter coefficients and prevents the filter from over-fitting the current frame. Furthermore, an efficient optimization algorithm based on the alternating direction method of multipliers (ADMM) is developed. Comprehensive experiments are performed on three benchmark datasets, OTB-2013, OTB-2015 and TempleColor-128, and our algorithm achieves more favorable performance than several state-of-the-art methods. Compared with SRDCF, our approach obtains an absolute gain of 6.6% and 5.1% in mean distance precision on OTB-2013 and OTB-2015, respectively. Our approach runs in real time on a CPU. (C) 2020 Elsevier B.V. All rights reserved.
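To make the two regularization terms described in the abstract concrete, a minimal LaTeX sketch of the kind of objective such a tracker could minimize is given below. The symbols used here (feature channels $x^{d}$, filter channels $f^{d}$, desired response $y$, reliability-based spatial weight $w$, second-order difference operator $P$, and trade-off parameters $\lambda_{1}$, $\lambda_{2}$) are illustrative assumptions based on the abstract, not the paper's exact formulation.

% Hedged sketch (assumed notation, not the paper's exact objective):
% a DCF data term plus (i) a reliability-weighted spatial penalty, where w
% is built from the reliability map so it stays roughly constant inside the
% estimated target region and grows outside it, and (ii) a spatial-variation
% term penalizing the second-order difference P f^d of each filter channel.
\begin{equation}
  \min_{f}\;
  \frac{1}{2}\Big\lVert \sum_{d=1}^{D} x^{d} \ast f^{d} - y \Big\rVert_{2}^{2}
  + \frac{\lambda_{1}}{2}\sum_{d=1}^{D} \big\lVert w \odot f^{d} \big\rVert_{2}^{2}
  + \frac{\lambda_{2}}{2}\sum_{d=1}^{D} \big\lVert P f^{d} \big\rVert_{2}^{2}
\end{equation}

An objective of this form becomes separable after introducing an auxiliary variable constrained to equal the filter, which is the usual way such problems are split and solved with ADMM, consistent with the optimization the abstract describes.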
Pages: 9
References
46 in total
  • [1] Anonymous, Proc. Computer Vision and Pattern Recognition.
  • [2] Anonymous, Proc. British Machine Vision Conference (BMVC), 2014.
  • [3] Anonymous, Proc. Computer Vision and Pattern Recognition.
  • [4] Anonymous, Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, DOI: 10.1109/CVPR.2006.215.
  • [5] Babenko B., Yang M.-H., Belongie S., Robust Object Tracking with Online Multiple Instance Learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8): 1619-1632, 2011.
  • [6] Bai Y.C., Proc. IEEE CVPR, 2012: 1854, DOI: 10.1109/CVPR.2012.6247884.
  • [7] Bertinetto L., Valmadre J., Golodetz S., Miksik O., Torr P.H.S., Staple: Complementary Learners for Real-Time Tracking, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 1401-1409.
  • [8] Bertinetto L., Valmadre J., Henriques J.F., Vedaldi A., Torr P.H.S., Fully-Convolutional Siamese Networks for Object Tracking, Computer Vision - ECCV 2016 Workshops, Part II, vol. 9914, 2016: 850-865.
  • [9] Dalal N., Triggs B., Histograms of Oriented Gradients for Human Detection, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, 2005: 886-893.
  • [10] Danelljan M., Hager G., Khan F.S., Felsberg M., Learning Spatially Regularized Correlation Filters for Visual Tracking, IEEE International Conference on Computer Vision (ICCV), 2015: 4310-4318.