Learning dynamic spatial-temporal regularized correlation filter tracking with response deviation suppression via multi-feature fusion

Times Cited: 20
Authors
Moorthy, Sathishkumar [1 ]
Joo, Young Hoon [1 ]
Affiliations
[1] Kunsan Natl Univ, Sch IT Informat & Control Engn, 558 Daehak Ro, Gunsan Si 54150, Jeonbuk, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Feature fusion; Correlation filters; Spatial-temporal information; Response deviation-suppression; Visual tracking; OBJECT TRACKING;
DOI
10.1016/j.neunet.2023.08.019
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visual object tracking (VOT) for intelligent video surveillance has attracted great attention in the research community, thanks to advances in computer vision and camera technology. Meanwhile, discriminative correlation filter (DCF) trackers have garnered significant interest owing to their high accuracy and low computational cost. Many researchers have introduced spatial and temporal regularization into the DCF framework to build a more robust appearance model and further improve tracking performance. However, these algorithms typically use fixed spatial and temporal regularization parameters, which limits their flexibility and adaptability in cluttered and challenging scenarios. To overcome these problems, we propose a new dynamic spatial-temporal regularization for the DCF tracking model that encourages the filter to concentrate on more reliable regions during the training stage. Furthermore, we present a response deviation-suppressed regularization term that encourages temporal consistency and avoids model degradation by suppressing relative response changes between two consecutive frames. Moreover, we introduce a multi-memory tracking framework that exploits multiple features, with each memory contributing to tracking the target across all frames. Extensive experiments on the OTB-2013, OTB-2015, TC-128, UAV-123, UAVDT, and DTB-70 datasets show that the proposed tracker outperforms many state-of-the-art DCF-based and deep-learning-based trackers in terms of tracking accuracy and success rate. © 2023 Elsevier Ltd. All rights reserved.
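For orientation, a minimal illustrative objective in the style of spatial-temporal regularized correlation filters, extended with a response deviation-suppression term, is sketched below. The symbols (w, mu, gamma, R_t) and the exact weighting are assumptions for exposition and may differ from the paper's actual formulation:

\min_{\mathbf{f}_t}\ \frac{1}{2}\Big\|\sum_{k=1}^{K}\mathbf{x}_t^{k}\ast\mathbf{f}_t^{k}-\mathbf{y}\Big\|_2^2+\frac{1}{2}\sum_{k=1}^{K}\big\|\mathbf{w}\odot\mathbf{f}_t^{k}\big\|_2^2+\frac{\mu}{2}\big\|\mathbf{f}_t-\mathbf{f}_{t-1}\big\|_2^2+\frac{\gamma}{2}\big\|\mathbf{R}_t-\mathbf{R}_{t-1}\big\|_2^2

Here x_t^k denotes the K feature channels of frame t, y the desired Gaussian-shaped response, w a (dynamically adapted) spatial weight map, f_{t-1} the filter from the previous frame, and R_t, R_{t-1} the correlation responses of two consecutive frames; the first term is the ridge-regression data term, the second the spatial regularization, the third the temporal regularization, and the last term penalizes frame-to-frame response deviation.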
Pages: 360-379
Page count: 20