Transformer RGBT Tracking With Spatio-Temporal Multimodal Tokens

Cited: 3
Authors
Sun, Dengdi [1 ,2 ,3 ]
Pan, Yajie [4 ]
Lu, Andong [4 ]
Li, Chenglong [5 ]
Luo, Bin [4 ]
Affiliations
[1] Anhui Univ, Sch Artificial Intelligence, Key Lab Intelligent Comp & Signal Proc, Minist Educ, Hefei 230601, Peoples R China
[2] Jianghuai Adv Technol Ctr, Hefei 230000, Peoples R China
[3] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei 230026, Peoples R China
[4] Anhui Univ, Sch Comp Sci & Technol, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei 230601, Peoples R China
[5] Anhui Univ, Sch Artificial Intelligence, Key Lab Intelligent Comp & Signal Proc, Minist Educ, Anhui Prov Key Lab Secur Artificial Intelligence, Hefei 230601, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
RGBT tracking; transformer; cross-modal interaction; spatio-temporal multimodal tokens; NETWORK;
DOI
10.1109/TCSVT.2024.3425455
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Code
0808; 0809;
Abstract
Many RGBT tracking studies focus primarily on modal fusion design while overlooking the effective handling of target appearance changes. Although some approaches incorporate temporal information by introducing historical frames or by fusing and replacing the initial template, they risk disrupting the original target appearance and accumulating errors over time. To alleviate these limitations, we propose a novel Transformer RGBT tracking approach that mixes spatio-temporal multimodal tokens from the static multimodal templates and the multimodal search regions within the Transformer to handle target appearance changes for robust RGBT tracking. We introduce independent dynamic template tokens that interact with the search region, embedding temporal information to address appearance changes, while the initial static template tokens remain involved in joint feature extraction; this preserves the original, reliable target appearance information and prevents the deviations from the target appearance that traditional temporal updates can cause. We further use attention mechanisms to enhance the target features of the multimodal template tokens with supplementary modal cues, and let the multimodal search-region tokens interact with the multimodal dynamic template tokens via attention, which conveys multimodal-enhanced information about target changes. Our module is inserted into the Transformer backbone network and inherits joint feature extraction, search-template matching, and cross-modal interaction. Extensive experiments on three RGBT benchmark datasets show that the proposed approach achieves competitive performance against state-of-the-art tracking algorithms while running at 39.1 FPS. The project-related materials are available at: https://github.com/yinghaidada/STMT.
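The abstract describes two attention-based interactions: cross-modal enhancement of the multimodal template tokens, and search-region tokens attending to the multimodal dynamic template tokens. Below is a minimal PyTorch sketch of that token-mixing idea; it is not the authors' released code, and all names (STMTokenMixer, d_model, the tensor layout) are illustrative assumptions.

import torch
import torch.nn as nn

class STMTokenMixer(nn.Module):
    """Hypothetical sketch: mixes static/dynamic multimodal template tokens
    with search-region tokens, as described in the abstract."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        # Cross-modal attention: enhance one modality's template tokens
        # with supplementary cues from the other modality.
        self.cross_modal = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Temporal attention: search tokens query the dynamic template
        # tokens to pick up target appearance changes.
        self.temporal = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, z_rgb, z_tir, z_dyn_rgb, z_dyn_tir, x_rgb, x_tir):
        # All inputs are (B, N, C) token tensors: z_* static templates,
        # z_dyn_* dynamic templates, x_* search regions, per modality.
        # 1) Cross-modal enhancement of the template tokens.
        z_rgb_e, _ = self.cross_modal(z_rgb, z_tir, z_tir)
        z_tir_e, _ = self.cross_modal(z_tir, z_rgb, z_rgb)
        # 2) Search tokens attend to the concatenated multimodal dynamic
        #    templates, conveying multimodal-enhanced target-change cues.
        dyn = torch.cat([z_dyn_rgb, z_dyn_tir], dim=1)
        x_rgb = self.norm(x_rgb + self.temporal(x_rgb, dyn, dyn)[0])
        x_tir = self.norm(x_tir + self.temporal(x_tir, dyn, dyn)[0])
        # Static templates pass through unchanged, preserving the
        # original reliable target appearance.
        return z_rgb_e, z_tir_e, x_rgb, x_tir

# Usage with random tokens (batch 2, 64 template / 256 search tokens):
# mixer = STMTokenMixer(d_model=256)
# z = [torch.randn(2, 64, 256) for _ in range(4)]
# x = [torch.randn(2, 256, 256) for _ in range(2)]
# z_rgb_e, z_tir_e, x_rgb, x_tir = mixer(*z, *x)

In this sketch a single attention module is shared across both modality directions for brevity, and the static template tokens are left untouched, mirroring the abstract's claim that the original appearance information is preserved; the paper's actual parameterization and insertion points in the backbone may differ.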
Pages: 12059-12072
Page count: 14