Future pseudo-LiDAR frame prediction for autonomous driving

Cited by: 2
Authors
Huang, Xudong [1 ,2 ]
Lin, Chunyu [1 ,2 ]
Liu, Haojie [1 ,2 ]
Nie, Lang [1 ,2 ]
Zhao, Yao [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Pseudo-LiDAR prediction; Dense depth map; Autonomous driving; Convolutional neural network; Depth completion; RGB-D; DEPTH; NETWORK;
DOI
10.1007/s00530-022-00921-x
CLC Number
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
LiDAR sensors are widely used in autonomous driving because they provide reliable 3D spatial information. However, LiDAR data are sparse, and LiDAR operates at a lower frame rate than cameras. To generate point clouds that are denser both spatially and temporally, we propose the first future pseudo-LiDAR frame prediction network. Given consecutive sparse depth maps and RGB images, we first coarsely predict a future dense depth map based on dynamic motion information. To suppress errors in optical flow estimation, an inter-frame aggregation module is proposed that fuses the warped depth maps with adaptive weights. We then refine the predicted dense depth map using static contextual information. The future pseudo-LiDAR frame is obtained by converting the predicted dense depth map into the corresponding 3D point cloud. Extensive experiments are conducted on depth completion, pseudo-LiDAR interpolation, and LiDAR prediction. Our approach achieves state-of-the-art performance on all of these tasks on the popular KITTI dataset, with the primary evaluation metric RMSE reaching 1214.
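The final step the abstract describes, converting a predicted dense depth map into a pseudo-LiDAR point cloud, is conventionally done by back-projecting each pixel through the pinhole camera model. A minimal sketch of that standard conversion is shown below; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the function name are illustrative placeholders, not the authors' actual implementation or KITTI's calibration values.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, in metres) into an (N, 3)
    point cloud in the camera frame via the pinhole model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (non-positive depth) pixels

# Toy usage with made-up intrinsics: a flat 4x4 depth map at 10 m
depth = np.full((4, 4), 10.0)
pts = depth_to_pseudo_lidar(depth, fx=700.0, fy=700.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

In practice the resulting points would also be transformed from the camera frame into the LiDAR frame using the dataset's extrinsic calibration, which is omitted here.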
Pages: 1611-1620
Page count: 10
Related papers
44 items in total
[2]   Temporal LiDAR Frame Prediction for Autonomous Driving [J].
Deng, David ;
Zakhor, Avideh .
2020 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2020), 2020, :829-837
[3]   Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End [J].
Eldesokey, Abdelrahman ;
Felsberg, Michael ;
Holmquist, Karl ;
Persson, Michael .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, :12011-12020
[4]   A Point Set Generation Network for 3D Object Reconstruction from a Single Image [J].
Fan, Haoqiang ;
Su, Hao ;
Guibas, Leonidas .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :2463-2471
[5]  
Finn C, 2016, ADV NEUR IN, V29
[6]   Vision meets robotics: The KITTI dataset [J].
Geiger, A. ;
Lenz, P. ;
Stiller, C. ;
Urtasun, R. .
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2013, 32 (11) :1231-1237
[7]   Deep Residual Learning for Image Recognition [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :770-778
[8]  
Herrera CD, 2013, LECT NOTES COMPUT SC, V7944, P555
[9]   PENet: Towards Precise and Efficient Image Guided Depth Completion [J].
Hu, Mu ;
Wang, Shuling ;
Li, Bin ;
Ning, Shiyu ;
Fan, Li ;
Gong, Xiaojin .
2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, :13656-13662
[10]  
Hu XX, 2019, IEEE IMAGE PROC, P1440, DOI 10.1109/ICIP.2019.8803025