Robust visual object tracking using context-based spatial variation via multi-feature fusion

Cited by: 27
Authors
Elayaperumal, Dinesh [1 ]
Joo, Young Hoon [1 ]
Affiliation
[1] Kunsan Natl Univ, Sch IT Informat & Control Engn, 588 Daehak Ro, Gunsan Si 54150, Jeonbuk, South Korea
Funding
National Research Foundation of Singapore
Keywords
Correlation filter; Context; Spatial variation; Feature fusion; ADMM; Object tracking; CORRELATION FILTERS;
DOI
10.1016/j.ins.2021.06.084
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
With the advances in camera technology, visual tracking has attracted great attention in the field of computer vision. Numerous discriminative correlation filter (DCF) methods are widely used for tracking; nevertheless, most of them fail to locate the target efficiently in challenging situations, which leads to tracking failure throughout the sequence. To handle these issues, we propose a contextual-information-based spatial variation with multi-feature fusion method (CSVMF) for robust object tracking. This work incorporates the contextual information of the target to determine its location accurately, exploiting the relationship between the target and its surroundings to increase the efficiency of the tracker. In addition, we integrate spatial variation information, which measures the second-order difference of the filter, to avoid the over-fitting problem caused by changes in the filter coefficients. Furthermore, we adopt a multi-feature fusion strategy to enhance the target appearance representation using different metrics. The tracking results from different features are fused by employing the peak-to-sidelobe ratio (PSR), which measures the peak strength of the response. Finally, we conduct extensive experiments on the TC128, DTB70, UAV123@10fps, and UAV123 datasets to demonstrate that the proposed method achieves favorable performance over existing ones. (c) 2021 Elsevier Inc. All rights reserved.
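As a rough illustration of the PSR-based fusion described in the abstract, the Python sketch below computes the peak-to-sidelobe ratio of a correlation response map and uses it to weight per-feature responses. The function names, the size of the peak-exclusion window, and the normalized-PSR weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe is the
    response map with a small window around the peak excluded.
    The exclusion half-width (exclude=5) is an assumed value."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]

    # Mask out a window around the peak to obtain the sidelobe region.
    mask = np.ones_like(response, dtype=bool)
    r0, r1 = max(peak_idx[0] - exclude, 0), min(peak_idx[0] + exclude + 1, response.shape[0])
    c0, c1 = max(peak_idx[1] - exclude, 0), min(peak_idx[1] + exclude + 1, response.shape[1])
    mask[r0:r1, c0:c1] = False

    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def fuse_responses(responses):
    """Fuse per-feature correlation responses, weighting each map by its PSR
    (hypothetical normalized weighting for illustration only)."""
    psrs = np.array([peak_to_sidelobe_ratio(r) for r in responses])
    weights = psrs / psrs.sum()
    return sum(w * r for w, r in zip(weights, responses))

# Example: fuse two synthetic response maps and pick the target location.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    resp_hog = rng.random((64, 64)) * 0.1
    resp_cn = rng.random((64, 64)) * 0.1
    resp_hog[30, 32] = 1.0   # sharp peak -> high PSR -> larger weight
    resp_cn[31, 33] = 0.5
    fused = fuse_responses([resp_hog, resp_cn])
    print(np.unravel_index(np.argmax(fused), fused.shape))
```

A response map with a sharper, more isolated peak yields a higher PSR and therefore contributes more to the fused response, which is the intuition behind using PSR as a reliability measure for each feature channel.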
Pages: 467-482
Number of pages: 16
Related Papers
50 records in total
  • [1] Robust Visual Tracking Based on Adaptive Multi-Feature Fusion Using the Tracking Reliability Criterion
    Zhou, Lin
    Wang, Han
    Jin, Yong
    Hu, Zhentao
    Wei, Qian
    Li, Junwei
    Li, Jifang
    SENSORS, 2020, 20 (24): 1-19
  • [2] Robust object tracking via multi-feature adaptive fusion based on stability: contrast analysis
    Zhiyong Li
    Shuang He
    Mervat Hashem
    The Visual Computer, 2015, 31: 1319-1337
  • [3] Robust object tracking via multi-feature adaptive fusion based on stability: contrast analysis
    Li, Zhiyong
    He, Shuang
    Hashem, Mervat
    VISUAL COMPUTER, 2015, 31 (10): 1319-1337
  • [4] ADAPTIVE MULTI-FEATURE FUSION FOR ROBUST OBJECT TRACKING
    Liu, Mengxue
    Qi, Yujuan
    Wang, Yanjiang
    Liu, Baodi
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021: 1884-1888
  • [5] Multi-feature Fusion Based Object Detecting and Tracking
    Lu, Hong
    Li, Hongsheng
    Chai, Lin
    Fei, Shumin
    Liu, Guangyun
    MATERIALS AND COMPUTATIONAL MECHANICS, PTS 1-3, 2012, 117-119: 1824+
  • [6] Multi-feature fusion tracking algorithm based on peak-context learning
    Bouraffa, Tayssir
    Feng, Zihang
    Yan, Liping
    Xia, Yuanqing
    Xiao, Bo
    IMAGE AND VISION COMPUTING, 2022, 123
  • [7] Object tracking via Spatio-Temporal Context learning based on multi-feature fusion in stationary scene
    Cheng, Yunfei
    Wang, Wu
    AOPC 2017: OPTICAL SENSING AND IMAGING TECHNOLOGY AND APPLICATIONS, 2017, 10462
  • [8] Robust thermal infrared tracking via an adaptively multi-feature fusion model
    Yuan, Di
    Shu, Xiu
    Liu, Qiao
    Zhang, Xinming
    He, Zhenyu
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (04): 3423-3434
  • [9] Research on Object Tracking Algorithm via Adaptive Multi-Feature Fusion
    Xia, Runlong
    Chen, Yuantao
    INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND ROBOTICS 2021, 2021, 11884
  • [10] Robust Object Tracking Based on Timed Motion History Image With Multi-feature Adaptive Fusion
    Li, Zhiyong
    Gao, Song
    Nai, Ke
    Zeng, Ying
    2016 12TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (ICNC-FSKD), 2016: 845-851