An Effective Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds

Cited by: 1
Authors
Zheng, Chaoda [1 ,2 ]
Yan, Xu [1 ,2 ]
Zhang, Haiming [1 ,2 ]
Wang, Baoyuan [3 ]
Cheng, Shenghui [4 ]
Cui, Shuguang [1 ,2 ]
Li, Zhen [1 ,2 ]
Affiliations
[1] Chinese Univ Hong Kong, Future Network Intelligence Inst FNii, Shenzhen 518172, Peoples R China
[2] Chinese Univ Hong Kong, Sch Sci & Engn SSE, Shenzhen 518172, Peoples R China
[3] Xiaobing AI, Beijing 100032, Peoples R China
[4] Westlake Univ, Hangzhou 310024, Zhejiang, Peoples R China
Keywords
Single object tracking; point cloud; LiDAR; motion; semi-supervised learning
DOI
10.1109/TPAMI.2023.3324372
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
3D single object tracking in LiDAR point clouds (LiDAR SOT) plays a crucial role in autonomous driving. Current approaches all follow the Siamese paradigm based on appearance matching. However, LiDAR point clouds are usually textureless and incomplete, which hinders effective appearance matching. Moreover, previous methods largely overlook the critical motion cues among targets. In this work, beyond 3D Siamese tracking, we introduce a motion-centric paradigm that handles LiDAR SOT from a new perspective. Following this paradigm, we propose a matching-free two-stage tracker, M²-Track. In the first stage, M²-Track localizes the target within successive frames via motion transformation; in the second stage, it refines the target box through motion-assisted shape completion. Owing to its motion-centric nature, our method generalizes impressively well with limited training labels and provides good differentiability for end-to-end cycle training. This inspires us to explore semi-supervised LiDAR SOT by incorporating a pseudo-label-based motion augmentation and a self-supervised loss term. Under the fully-supervised setting, extensive experiments confirm that M²-Track significantly outperforms previous state-of-the-art methods on three large-scale datasets while running at 57 FPS (~3%, ~11%, and ~22% precision gains on KITTI, NuScenes, and the Waymo Open Dataset, respectively). Under the semi-supervised setting, our method performs on par with or even surpasses its fully-supervised counterpart using fewer than half of the labels from KITTI. Further analysis verifies each component's effectiveness and shows the motion-centric paradigm's promising potential for auto-labeling and unsupervised domain adaptation.
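To make the matching-free two-stage pipeline concrete, the minimal Python sketch below mirrors only the high-level description in this abstract; it is not the authors' implementation. The networks motion_net and refine_net, the 4-DoF motion parameterization (dx, dy, dz, dheading), and the box layout (cx, cy, cz, w, l, h, heading) are all hypothetical placeholders.

```python
import numpy as np

def apply_rigid_motion(box, motion):
    """Move a 3D box (cx, cy, cz, w, l, h, heading) by a 4-DoF rigid
    motion (dx, dy, dz, dheading); the box size stays fixed."""
    cx, cy, cz, w, l, h, heading = box
    dx, dy, dz, dheading = motion
    return np.array([cx + dx, cy + dy, cz + dz, w, l, h, heading + dheading])

def track_one_frame(prev_points, curr_points, prev_box, motion_net, refine_net):
    """One tracking step of the motion-centric paradigm (illustrative only)."""
    # Stage 1: regress the target's inter-frame motion directly from the
    # two point clouds (no appearance matching), then transform the
    # previous box by that motion to obtain a coarse current-frame box.
    motion = motion_net(prev_points, curr_points, prev_box)
    coarse_box = apply_rigid_motion(prev_box, motion)

    # Stage 2: motion-assisted shape completion -- warp the previous
    # frame's target points by the estimated motion to densify the sparse
    # target, then refine the coarse box on the merged evidence.
    refined_box = refine_net(prev_points, curr_points, motion, coarse_box)
    return refined_box
```

Because stage 1 outputs a rigid transform rather than a matching score map, tracking forward (t-1 to t) and then backward (t to t-1) should return the box to its starting pose; a residual between the two poses is the kind of self-supervised signal the end-to-end cycle training mentioned above can exploit.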
Pages: 43 - 60
Number of Pages: 18