Learning Localization-Aware Target Confidence for Siamese Visual Tracking

Cited by: 28
Authors
Nie, Jiahao [1 ]
He, Zhiwei [1 ]
Yang, Yuxiang [2 ]
Gao, Mingyu [1 ]
Dong, Zhekang [3 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Elect Informat, Hangzhou 310018, Peoples R China
[2] Univ Sci & Technol China, Sch Control Sci & Engn, Hefei 230052, Peoples R China
[3] Zhejiang Univ, Sch Elect Engn, Hangzhou 310058, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Target tracking; Task analysis; Feature extraction; Training; Location awareness; Visualization; Smoothing methods; Localization-aware components; Siamese tracking paradigm; task misalignment; OBJECT TRACKING;
DOI
10.1109/TMM.2022.3206668
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The Siamese tracking paradigm has achieved great success, providing effective appearance discrimination and size estimation through classification and regression. However, this paradigm typically optimizes the classification and regression tasks independently, leading to task misalignment: accurate prediction boxes do not necessarily receive high target confidence scores. In this paper, to alleviate this misalignment, we propose a novel tracking paradigm, called SiamLA, which introduces a series of simple yet effective localization-aware components to generate localization-aware target confidence scores. Specifically, the proposed localization-aware dynamic label (LADL) loss and localization-aware label smoothing (LALS) strategy achieve collaborative optimization between classification and regression, enabling classification scores to be aware of the location state rather than reflecting appearance similarity alone. In addition, we propose a separate localization-aware quality prediction (LAQP) branch that produces location quality scores to further modify the classification scores. To guide a more reliable modification, a novel localization-aware feature aggregation (LAFA) module is designed and embedded into this branch. Consequently, the resulting target confidence scores are more discriminative with respect to the location state, so accurate prediction boxes tend to receive high scores. Extensive experiments are conducted on six challenging benchmarks: GOT-10k, TrackingNet, LaSOT, TNL2K, OTB100 and VOT2018. Our SiamLA achieves competitive performance in terms of both accuracy and efficiency. Furthermore, a stability analysis shows that our tracking paradigm is relatively stable, suggesting its potential for real-world applications.
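As a rough illustration of the mechanism the abstract describes, and not the authors' released implementation, the sketch below couples classification targets to regression quality via IoU-derived dynamic soft labels (in the spirit of the LADL loss) and fuses a predicted location quality score into the final target confidence at inference (in the spirit of the LAQP branch). The function names, the binary-cross-entropy form, and the geometric-mean fusion with `alpha` are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def box_iou(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """IoU between paired boxes, both (N, 4) in (x1, y1, x2, y2) format."""
    lt = torch.max(pred[:, :2], gt[:, :2])   # top-left of intersection
    rb = torch.min(pred[:, 2:], gt[:, 2:])   # bottom-right of intersection
    wh = (rb - lt).clamp(min=0)              # zero width/height if no overlap
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    return inter / (area_p + area_g - inter + 1e-7)

def localization_aware_cls_loss(cls_logits, pred_boxes, gt_boxes, pos_mask):
    """Classification loss with IoU-derived dynamic soft labels (hypothetical
    LADL-style formulation): positive samples whose regressed boxes localize
    well receive higher classification targets, so the classification branch
    is optimized jointly with regression quality instead of independently."""
    labels = torch.zeros_like(cls_logits)
    labels[pos_mask] = box_iou(pred_boxes[pos_mask], gt_boxes[pos_mask]).detach()
    return F.binary_cross_entropy_with_logits(cls_logits, labels)

def target_confidence(cls_score, quality_score, alpha=0.5):
    """Inference-time fusion (assumed form): modify the appearance score with
    the predicted location quality so the final target confidence is
    discriminative for the location state, not appearance similarity alone."""
    return cls_score.pow(alpha) * quality_score.pow(1.0 - alpha)
```

Multiplicative fusion of a classification score with a predicted quality score is a common design in IoU-aware and centerness-based anchor-free detectors; the exact LADL/LALS/LAQP/LAFA formulations are defined in the paper itself.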
Pages: 6194-6206
Number of pages: 13