Separable Self and Mixed Attention Transformers for Efficient Object Tracking

Cited by: 25
Authors
Gopal, Goutam Yelluru [1]
Amer, Maria A. [1]
Affiliations
[1] Concordia Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
Source
2024 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION, WACV 2024 | 2024
DOI
10.1109/WACV57701.2024.00657
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The deployment of transformers for visual object tracking has shown state-of-the-art results on several benchmarks. However, transformer-based models are underutilized for Siamese lightweight tracking due to the computational complexity of their attention blocks. This paper proposes an efficient self and mixed attention transformer-based architecture for lightweight tracking. The proposed backbone utilizes separable mixed attention transformers to fuse the template and search regions during feature extraction and generate superior feature encoding. Our prediction head performs global contextual modeling of the encoded features by leveraging efficient self-attention blocks for robust target state estimation. With these contributions, the proposed lightweight tracker deploys a transformer-based backbone and head module concurrently for the first time. Our ablation study testifies to the effectiveness of the proposed combination of backbone and head modules. Simulations show that our Separable Self and Mixed Attention-based Tracker, SMAT, surpasses the performance of related lightweight trackers on the GOT10k, TrackingNet, LaSOT, NfS30, UAV123, and AVisT datasets, while running at 37 fps on CPU and 158 fps on GPU with 3.8M parameters. For example, it significantly surpasses the closely related trackers E.T.Track and MixFormerV2-S on GOT10k-test by margins of 7.9% and 5.8%, respectively, in the AO metric. The tracker code and model are available at https://github.com/goutamyg/SMAT.
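The efficiency claim in the abstract rests on separable attention, which replaces the quadratic token-to-token score matrix of standard self-attention with per-token scalar scores and a single global context vector, giving cost linear in the number of tokens. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch of a MobileViTv2-style separable self-attention block, where the function name and weight matrices (Wi, Wk, Wv, Wo) are illustrative assumptions, not SMAT's actual parameters:

```python
import numpy as np

def separable_self_attention(x, Wi, Wk, Wv, Wo):
    """Linear-complexity attention sketch: O(n*d) instead of O(n^2*d).

    x:  (n, d) token embeddings
    Wi: (d, 1) projection producing one scalar score per token
    Wk, Wv, Wo: (d, d) key/value/output projections
    """
    scores = x @ Wi                           # (n, 1) one score per token
    scores = np.exp(scores - scores.max())    # softmax over the n tokens
    scores = scores / scores.sum()
    # Weighted sum of keys -> a single (d,) global context vector,
    # instead of an (n, n) attention matrix.
    context = (scores * (x @ Wk)).sum(axis=0)
    # Each value vector is modulated by the shared context (broadcast).
    gated = np.maximum(x @ Wv, 0.0) * context
    return gated @ Wo                         # (n, d)
```

Because the context vector is shared across tokens, doubling the sequence length only doubles the cost, which is what makes such blocks viable for CPU-speed lightweight trackers.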
Pages: 6694-6703 (10 pages)
References (40 total)
[1] Blatter P., Kanakis M., Danelljan M., Van Gool L. Efficient Visual Tracking with Exemplar Transformers. WACV 2023: 1571-1581.
[2] Borsuk V., Vei R., Kupyn O., Martyniuk T., Krashenyi I., Matas J. FEAR: Fast, Efficient, Accurate and Robust Visual Tracker. ECCV 2022, Pt. XXII, LNCS 13682: 644-663.
[3] Cao Z., Fu C., Ye J., Li B., Li Y. HiFT: Hierarchical Feature Transformer for Aerial Tracking. ICCV 2021: 15437-15446.
[4] Cehovin L., Leonardis A., Kristan M. Visual Object Tracking Performance Measures Revisited. IEEE Transactions on Image Processing, 2016, 25(3): 1261-1274.
[5] Chen X. Computer Vision - ECCV 2022 Workshops, LNCS 13808, 2023: 461. DOI: 10.1007/978-3-031-25085-9_26.
[6] Chen X., Peng H., Wang D., Lu H., Hu H. SeqTrack: Sequence to Sequence Learning for Visual Object Tracking. CVPR 2023: 14572-14581.
[7] Chen X., Yan B., Zhu J., Wang D., Yang X., Lu H. Transformer Tracking. CVPR 2021: 8122-8131.
[8] Chen Y. NeurIPS, 2019, Vol. 32.
[9] Cui Y., Bing H., Moorhead D.L., Delgado-Baquerizo M., Ye L., Yu J., Zhang S., Wang X., Peng S., Guo X., Zhu B., Chen J., Tan W., Wang Y., Zhang X., Fang L. Ecoenzymatic stoichiometry reveals widespread soil phosphorus limitation to microbial metabolism across Chinese forests. Communications Earth & Environment, 2022, 3(1).
[10] Cui Y. arXiv, 2023.