S.T.A.R.-Track: Latent Motion Models for End-to-End 3D Object Tracking With Adaptive Spatio-Temporal Appearance Representations

Cited by: 0
Authors
Doll, Simon [1 ,2 ]
Hanselmann, Niklas [1 ]
Schneider, Lukas [2 ]
Schulz, Richard [2 ]
Enzweiler, Markus [3 ]
Lensch, Hendrik P. A. [2 ]
Affiliations
[1] Mercedes Benz AG, D-71063 Sindelfingen, Germany
[2] Univ Tubingen, D-72074 Tubingen, Germany
[3] Esslingen Univ Appl Sci, Inst Intelligent Syst, D-73732 Esslingen, Germany
Keywords
Tracking; Detectors; Three-dimensional displays; Solid modeling; Cameras; Feature extraction; Transformers; Visual Tracking; Deep Learning for Visual Perception; Autonomous Vehicle Navigation;
DOI
10.1109/LRA.2023.3342552
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Following the tracking-by-attention paradigm, this letter introduces an object-centric, transformer-based framework for tracking in 3D. Traditional model-based tracking approaches use a geometric motion model to account for the effect of object and ego motion between frames. Inspired by this, we propose S.T.A.R.-Track, which uses a novel latent motion model (LMM) to additionally adjust object queries directly in the latent space to account for changes in viewing direction and lighting conditions, while still modeling the geometric motion explicitly. Combined with a novel learnable track embedding that aids in modeling the existence probability of tracks, this results in a generic tracking framework that can be integrated with any query-based detector. Extensive experiments on the nuScenes benchmark demonstrate the benefits of our approach, showing state-of-the-art (SOTA) performance for DETR3D-based trackers while drastically reducing the number of track identity switches.
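To make the abstract's latent motion model (LMM) idea concrete, the following is a minimal, hedged sketch and not the authors' implementation: it assumes a track query is updated by (1) an explicit geometric motion model handled elsewhere and (2) a small learned module that adjusts the latent query for appearance changes (viewing direction, lighting) induced by object and ego motion. All module names, dimensions (query_dim=256, pose_dim=7), and the residual-MLP design are assumptions for illustration only.

    # Hedged sketch (not the paper's code): a toy latent update for a track query,
    # conditioned on the relative object/ego pose change between frames.
    import torch
    import torch.nn as nn


    class LatentMotionModelSketch(nn.Module):
        """Toy latent update: refine a track query given the relative pose change."""

        def __init__(self, query_dim: int = 256, pose_dim: int = 7):
            super().__init__()
            # Encode the relative pose change (e.g. translation + rotation parameters).
            self.pose_encoder = nn.Sequential(
                nn.Linear(pose_dim, query_dim), nn.ReLU(), nn.Linear(query_dim, query_dim)
            )
            # Predict a residual update to the latent query conditioned on that change.
            self.query_update = nn.Sequential(
                nn.Linear(2 * query_dim, query_dim), nn.ReLU(), nn.Linear(query_dim, query_dim)
            )

        def forward(self, track_query: torch.Tensor, rel_pose: torch.Tensor) -> torch.Tensor:
            pose_feat = self.pose_encoder(rel_pose)
            residual = self.query_update(torch.cat([track_query, pose_feat], dim=-1))
            return track_query + residual  # appearance-adapted query for the next frame


    if __name__ == "__main__":
        lmm = LatentMotionModelSketch()
        q = torch.randn(4, 256)      # four track queries
        d_pose = torch.randn(4, 7)   # relative pose change per track (illustrative)
        print(lmm(q, d_pose).shape)  # torch.Size([4, 256])

In the actual framework, such an update would sit between frames of a query-based detector (e.g. DETR3D), alongside the explicit geometric propagation of the box state; the sketch only illustrates the "adjust the query in latent space" step described in the abstract.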
Pages: 1326-1333
Page count: 8