Looking Beyond Two Frames: End-to-End Multi-Object Tracking Using Spatial and Temporal Transformers

Cited by: 21
Authors
Zhu, Tianyu [1 ]
Hiller, Markus [2 ]
Ehsanpour, Mahsa [3 ]
Ma, Rongkai [1 ]
Drummond, Tom [2 ]
Reid, Ian
Rezatofighi, Hamid [4 ]
Affiliations
[1] Monash Univ, Dept Elect & Comp Syst Engn, Clayton, Vic 3800, Australia
[2] Univ Melbourne, Sch Comp & Informat Syst, Parkville, Vic 3010, Australia
[3] Univ Adelaide, Australian Inst Machine Learning, Adelaide, SA 5005, Australia
[4] Monash Univ, Dept Data Sci & AI, Clayton, Vic 3800, Australia
Keywords
Tracking; Transformers; Task analysis; Visualization; Object recognition; History; Feature extraction; Multi-object tracking; transformer; spatio-temporal model; pedestrian tracking; end-to-end learning; MULTITARGET;
DOI
10.1109/TPAMI.2022.3213073
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Tracking a time-varying, indefinite number of objects in a video sequence remains a challenge despite recent advances in the field. Most existing approaches cannot properly handle multi-object tracking challenges such as occlusion, in part because they ignore long-term temporal information. To address these shortcomings, we present MO3TR: a truly end-to-end Transformer-based online multi-object tracking (MOT) framework that learns to handle occlusions, track initiation, and track termination without the need for an explicit data association module or any heuristics. MO3TR encodes object interactions into long-term temporal embeddings using a combination of spatial and temporal Transformers, and recursively uses this information jointly with the input data to estimate the states of all tracked objects over time. The spatial attention mechanism enables our framework to learn implicit representations of the interactions among all objects and between objects and measurements, while the temporal attention mechanism focuses on specific parts of past information, allowing our approach to resolve occlusions over multiple frames. Our experiments demonstrate the potential of this new approach, achieving results on par with or better than the current state of the art on multiple MOT metrics across several popular multi-object tracking benchmarks.
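The abstract describes two attention stages: temporal attention over each track's own history, and spatial attention among tracks and current-frame measurements. The following is a minimal PyTorch sketch of that general idea, not the authors' MO3TR implementation; all module names, tensor shapes, the single-layer structure, and the way the two attention stages are chained are assumptions made purely for illustration.

```python
# Minimal illustrative sketch of combining temporal and spatial attention
# for track-state updates (NOT the authors' MO3TR code; shapes and fusion
# scheme are assumptions).
import torch
import torch.nn as nn


class SpatioTemporalUpdate(nn.Module):
    """One update step: temporal attention over each track's history,
    then spatial attention among tracks and current-frame measurements."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, history: torch.Tensor, measurements: torch.Tensor) -> torch.Tensor:
        """
        history:      (num_tracks, T, dim)  per-track embeddings over past frames
        measurements: (1, M, dim)           detection features from the current frame
        returns:      (1, num_tracks, dim)  updated track embeddings
        """
        # Temporal attention: each track queries its own history with its most
        # recent embedding, summarising long-term temporal information.
        query = history[:, -1:, :]                      # (num_tracks, 1, dim)
        tracks, _ = self.temporal_attn(query, history, history)
        tracks = self.norm1(tracks).transpose(0, 1)     # (1, num_tracks, dim)

        # Spatial attention: tracks attend jointly to each other and to the
        # current measurements, modelling object-object and
        # object-measurement interactions within the frame.
        context = torch.cat([tracks, measurements], dim=1)
        tracks, _ = self.spatial_attn(tracks, context, context)
        return self.norm2(tracks)


# Usage: 4 tracks with 10 frames of history, 6 detections in the current frame.
step = SpatioTemporalUpdate()
hist = torch.randn(4, 10, 256)
meas = torch.randn(1, 6, 256)
print(step(hist, meas).shape)  # torch.Size([1, 4, 256])
```

In the paper such an update is applied recursively, so the refreshed track embeddings at one frame become part of the history attended to at the next; the sketch only shows a single step.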
Pages: 12783-12797
Page count: 15
Related Papers
26 records in total
  • [21] Doll, Simon; Hanselmann, Niklas; Schneider, Lukas; Schulz, Richard; Enzweiler, Markus; Lensch, Hendrik P. A. S.T.A.R.-Track: Latent Motion Models for End-to-End 3D Object Tracking With Adaptive Spatio-Temporal Appearance Representations. IEEE Robotics and Automation Letters, 2024, 9(2): 1326-1333.
  • [22] Meng, Fanjie; Wang, Xinqing; Wang, Dong; Shao, Faming; Fu, Lei. Spatial-Semantic and Temporal Attention Mechanism-Based Online Multi-Object Tracking. Sensors, 2020, 20(6).
  • [23] Tian, Sheng; Zou, Lian; Fan, Cian; Chen, Liqiong. Weighted correlation filters guidance with spatial-temporal attention for online multi-object tracking. Journal of Visual Communication and Image Representation, 2019, 63.
  • [24] Monteiro, Nelson R. C.; Pereira, Tiago O.; Machado, Ana Catarina D.; Oliveira, Jose L.; Abbasi, Maryam; Arrais, Joel P. FSM-DDTR: End-to-end feedback strategy for multi-objective De Novo drug design using transformers. Computers in Biology and Medicine, 2023, 164.
  • [25] Fang, Zheng; Zhou, Sifan; Cui, Yubo; Scherer, Sebastian. 3D-SiamRPN: An End-to-End Learning Method for Real-Time 3D Single Object Tracking Using Raw Point Cloud. IEEE Sensors Journal, 2021, 21(4): 4995-5011.
  • [26] Li, Guofa; Chen, Xin; Li, Mingjun; Li, Wenbo; Li, Shen; Guo, Gang; Wang, Huaizhi; Deng, Hao. One-shot multi-object tracking using CNN-based networks with spatial-channel attention mechanism. Optics and Laser Technology, 2022, 153.