Learning the Incremental Warp for 3D Vehicle Tracking in LiDAR Point Clouds

Cited: 4
Authors
Tian, Shengjing [1]
Liu, Xiuping [1]
Liu, Meng [2]
Bian, Yuhao [1]
Gao, Junbin [3]
Yin, Baocai [4]
Affiliations
[1] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[2] Shandong Jianzhu Univ, Sch Comp & Technol, Jinan 250101, Peoples R China
[3] Univ Sydney, Business Sch, Discipline Business Analyt, Sydney, NSW 2006, Australia
[4] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
point clouds; 3D tracking; state estimation; Siamese network; deep LK; object tracking
DOI
10.3390/rs13142770
Chinese Library Classification
X [Environmental Science, Safety Science]
Subject Classification Code
08; 0830
Abstract
Object tracking from LiDAR point clouds, which are inherently incomplete, sparse, and unstructured, plays a crucial role in urban navigation. Some existing methods rely solely on a learned similarity network to locate the target, which greatly limits tracking accuracy. In this study, we leverage a powerful target discriminator and an accurate state estimator to robustly track target objects in challenging point cloud scenarios. Given the complex nature of state estimation, we extend the traditional Lucas-Kanade (LK) algorithm to 3D point cloud tracking. Specifically, we propose a state estimation subnetwork that learns the incremental warp used to update the coarse target state. Moreover, to obtain this coarse state, we present a simple yet efficient discrimination subnetwork that projects 3D shapes into a more discriminative latent space by integrating the global feature into each point-wise feature. Experiments on the KITTI and PandaSet datasets show that our method achieves significant improvements over state-of-the-art methods, in particular a gain of up to 13.68% on KITTI.
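The abstract describes two components that lend themselves to a compact illustration: a discrimination subnetwork that appends a pooled global feature to every point-wise feature, and a deep-LK-style estimator that iteratively predicts an incremental warp to refine a coarse target state. The following is a minimal NumPy sketch of both ideas; the 4-DoF state (x, y, z, yaw) and the predict_delta callable are illustrative assumptions, not the paper's actual parameterization or interfaces.

    import numpy as np

    def fuse_global(point_feats):
        # Discrimination-subnetwork idea (PointNet-style): max-pool a global
        # descriptor and append it to every point-wise feature.
        global_feat = point_feats.max(axis=0)                    # (C,)
        tiled = np.broadcast_to(global_feat, point_feats.shape)  # (N, C)
        return np.concatenate([point_feats, tiled], axis=1)      # (N, 2C)

    def warp(points, state):
        # Assumed 4-DoF rigid warp: translation (tx, ty, tz) plus yaw.
        tx, ty, tz, yaw = state
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        return points @ R.T + np.array([tx, ty, tz])

    def lk_refine(template, target, coarse_state, predict_delta,
                  n_iters=5, tol=1e-4):
        # Deep-LK-style loop: start from the coarse state and repeatedly
        # add a predicted incremental warp, p <- p + delta_p.
        state = np.asarray(coarse_state, dtype=float).copy()
        for _ in range(n_iters):
            delta = predict_delta(warp(template, state), target)
            state = state + delta
            if np.linalg.norm(delta) < tol:
                break
        return state

    # Toy stand-in for the learned increment predictor: translate toward
    # the target centroid; a real subnetwork would also regress yaw.
    def toy_predict_delta(warped, target):
        t = target.mean(axis=0) - warped.mean(axis=0)
        return np.append(t, 0.0)

    rng = np.random.default_rng(0)
    template = rng.normal(size=(128, 3))
    target = warp(template, np.array([1.0, -0.5, 0.2, 0.0]))  # synthetic truth
    print(lk_refine(template, target, np.zeros(4), toy_predict_delta))
    feats = fuse_global(rng.normal(size=(128, 32)))           # (128, 64) fused

In the paper, the coarse state would come from the discrimination subnetwork and the increment from the learned state estimator; the toy predictor above only illustrates how the iterative incremental update converges on the true state.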
Pages: 21