Learning Adaptive Discriminative Correlation Filters via Temporal Consistency Preserving Spatial Feature Selection for Robust Visual Object Tracking

Cited by: 304
Authors
Xu, Tianyang [1 ,2 ]
Feng, Zhen-Hua [2 ]
Wu, Xiao-Jun [1 ]
Kittler, Josef [2 ]
Affiliations
[1] Jiangnan Univ, Sch Internet Things Engn, Wuxi 214122, Jiangsu, Peoples R China
[2] Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford GU2 7XH, Surrey, England
Funding
National Natural Science Foundation of China; UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Visual object tracking; correlation filter; feature selection; temporal consistency; regression;
DOI
10.1109/TIP.2019.2919201
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
With efficient appearance learning models, the discriminative correlation filter (DCF) has proven very successful in recent video object tracking benchmarks and competitions. However, the existing DCF paradigm suffers from two major issues: the spatial boundary effect and temporal filter degradation. To mitigate these challenges, we propose a new DCF-based tracking method. The key innovations of the proposed method are adaptive spatial feature selection and temporally consistent constraints, with which the new tracker enables joint spatial-temporal filter learning in a lower-dimensional discriminative manifold. More specifically, we apply structured spatial sparsity constraints to multi-channel filters, so that learning the spatial filters can be approximated by lasso regularization. To encourage temporal consistency, the filter model is restricted to lie around its historical value and is updated locally to preserve the global structure in the manifold. Finally, a unified optimization framework is proposed that jointly selects temporal-consistency-preserving spatial features and learns discriminative filters via the augmented Lagrangian method. Qualitative and quantitative evaluations have been conducted on a number of well-known benchmark datasets, including OTB2013, OTB50, OTB100, Temple-Colour, UAV123, and VOT2018. The experimental results demonstrate the superiority of the proposed method over state-of-the-art approaches.
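The abstract's objective combines three ingredients: a ridge-style data term on the correlation response, a structured (group-lasso) spatial sparsity penalty across filter channels, and a temporal term keeping the filter near its historical value. The sketch below evaluates such an objective in NumPy. It is an illustrative reconstruction from the abstract only, not the paper's exact formulation; the function name, regularization weights, and the plain Frobenius temporal penalty are assumptions.

```python
import numpy as np

def dcf_objective(w, x, y, w_prev, lam_spatial=0.1, lam_temporal=1.0):
    """Illustrative DCF objective with structured spatial sparsity and
    temporal consistency (names and weights are hypothetical).

    w, x    : (C, H, W) multi-channel filter and feature map
    y       : (H, W) desired (e.g. Gaussian-shaped) response
    w_prev  : (C, H, W) filter learned on the previous frame
    """
    # Data term: circular correlation computed in the Fourier domain,
    # summed over channels, compared with the desired response.
    response = np.real(np.fft.ifft2(
        np.sum(np.conj(np.fft.fft2(w)) * np.fft.fft2(x), axis=0)))
    data = 0.5 * np.sum((response - y) ** 2)

    # Structured spatial sparsity: l2 norm across channels at each spatial
    # position, summed over positions (group lasso) -> drives entire
    # spatial locations of the filter to zero, i.e. feature selection.
    spatial = lam_spatial * np.sum(np.sqrt(np.sum(w ** 2, axis=0)))

    # Temporal consistency: penalize deviation from the historical filter.
    temporal = 0.5 * lam_temporal * np.sum((w - w_prev) ** 2)

    return data + spatial + temporal
```

In the paper this kind of non-smooth objective is minimized with the augmented Lagrangian method (splitting the lasso term from the quadratic terms); the function above only evaluates the cost for a candidate filter.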
Pages: 5596-5609
Page count: 14