A robust and adaptive framework with space-time memory networks for Visual Object Tracking

Authors
Zheng, Yu [1 ]
Liu, Yong [1 ]
Che, Xun [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, 200 Xiaolingwei, Nanjing 210094, Jiangsu, Peoples R China
Keywords
Space-time memory network; Memory frames; Historical frames; Robust and adaptive extraction strategy
DOI
10.1016/j.jvcir.2025.104431
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Trackers based on space-time memory networks locate the target object in the search image by exploiting contextual information from multiple memory frames and their corresponding foreground-background features. Consequently, these trackers are sensitive to both the quality of the memory frames and the accuracy of the corresponding foreground labels. Previous works rely on hand-crafted heuristics to select memory frames from historical frames, which limits generalization and performance. To address these limitations, we propose a robust and adaptive extraction strategy that selects the most representative historical frames into the memory-frame set, improving localization accuracy and reducing failures caused by error accumulation. Specifically, we propose an extraction network that evaluates historical frames: the historical frame with the highest score is selected as a memory frame, while the remaining frames are discarded. Qualitative and quantitative analyses on multiple datasets (OTB100, LaSOT and GOT-10k) show that the proposed method yields significant performance gains over previous works, especially in challenging scenarios, while incurring only a negligible degradation in inference speed; it also achieves competitive results compared with other state-of-the-art (SOTA) methods.
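The extraction strategy described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: `score_frame` is a placeholder for the paper's learned extraction network, and the confidence field, capacity limit, and eviction policy are assumptions made purely for illustration.

```python
def score_frame(frame):
    # Placeholder for the learned extraction network: here we simply
    # read a stored confidence value attached to each frame.
    return frame["confidence"]

def update_memory(memory, candidates, capacity=5):
    """Add the highest-scoring historical frame to the memory set.

    The remaining candidates are discarded; when the memory set
    exceeds `capacity`, the lowest-scoring memory frame is evicted.
    """
    best = max(candidates, key=score_frame)
    memory.append(best)
    if len(memory) > capacity:
        memory.remove(min(memory, key=score_frame))
    return memory

# Toy usage: one existing memory frame, two new historical frames.
memory = [{"id": 0, "confidence": 0.9}]
candidates = [{"id": 7, "confidence": 0.4}, {"id": 8, "confidence": 0.8}]
memory = update_memory(memory, candidates)
print([f["id"] for f in memory])  # → [0, 8]
```

The key design point mirrored here is that membership in the memory set is decided by a per-frame quality score rather than by a fixed rule such as "keep every k-th frame", which is the heuristic approach the paper argues against.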
Pages: 9