Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for Autonomous Driving

Cited by: 36
Authors
Li, Peixuan [1 ]
Jin, Jieyu [1 ]
Affiliations
[1] SAIC PP CEM, Shanghai, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
DOI
10.1109/CVPR52688.2022.00386
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
While monocular 3D object detection and 2D multi-object tracking can each be applied separately to sequence images in a frame-by-frame fashion, a stand-alone tracker cuts off the transmission of uncertainty from the 3D detector to the tracker and cannot pass tracking error differentials back to the 3D detector. In this work, we propose jointly training 3D detection and 3D tracking from only monocular videos in an end-to-end manner. The key component is a novel spatial-temporal information flow module that aggregates geometric and appearance features to predict robust similarity scores across all objects in current and past frames. Specifically, we leverage the attention mechanism of the transformer, in which self-attention aggregates spatial information within a specific frame, and cross-attention exploits the relations and affinities of all objects in the temporal domain of sequence frames. The affinities are then supervised to estimate the trajectory and to guide the flow of information between corresponding 3D objects. In addition, we propose a temporal-consistency loss that explicitly involves 3D target motion modeling in the learning, making the 3D trajectory smooth in the world coordinate system. Time3D achieves 21.4% AMOTA and 13.6% AMOTP on the nuScenes 3D tracking benchmark, surpassing all published competitors and running at 38 FPS, while achieving 31.2% mAP and 39.4% NDS on the nuScenes 3D detection benchmark.
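The abstract's core mechanism — self-attention pooling spatial context within a frame, then cross-attention producing affinity scores between objects in the current and past frames — can be sketched as follows. This is an illustrative NumPy sketch, not the paper's actual architecture: the feature dimension, object counts, and random features are assumptions, and Time3D additionally fuses geometric and appearance cues before attention.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention; returns output and attention weights."""
    w = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return w @ v, w

d = 16                           # per-object feature dimension (illustrative)
cur = rng.normal(size=(5, d))    # features of 5 objects in the current frame
past = rng.normal(size=(4, d))   # features of 4 objects in a past frame

# Self-attention: aggregate spatial context among objects of the same frame.
cur_ctx, _ = attention(cur, cur, cur)
past_ctx, _ = attention(past, past, past)

# Cross-attention: current-frame queries attend to past-frame keys/values;
# the attention weights act as soft affinity scores between object pairs,
# which a tracking loss could then supervise to recover trajectories.
_, affinity = attention(cur_ctx, past_ctx, past_ctx)

print(affinity.shape)  # (5, 4): each row is a distribution over past objects
```

In a trained model these affinity rows would be supervised against ground-truth identity matches, so that information flows only between corresponding 3D objects across frames.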
Pages: 3875-3884
Page count: 10