Real-time depth completion based on LiDAR-stereo for autonomous driving

Cited by: 0
Authors
Wei, Ming [1,2]
Zhu, Ming [1]
Zhang, Yaoyuan [1,2]
Wang, Jiarong [1]
Sun, Jiaqi [1,2]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Opt Fine Mech & Phys, Changchun, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Keywords
sensor fusion; depth completion; point cloud; autonomous driving; LiDAR-stereo
DOI
10.3389/fnbot.2023.1124676
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The integration of multiple sensors is a crucial and emerging trend in the development of autonomous driving technology. The depth image obtained by stereo matching of a binocular camera is easily affected by the environment and by distance, while the LiDAR point cloud has strong penetrating ability but is much sparser than binocular images. LiDAR-stereo fusion can combine the complementary advantages of the two sensors and maximize the acquisition of reliable three-dimensional information, improving the safety of autonomous driving; such cross-sensor fusion is a key issue in the development of the technology. This study proposed a real-time LiDAR-stereo depth completion network without 3D convolution that fuses point clouds and binocular images using injection guidance. A kernel-connected spatial propagation network was then utilized to refine the depth, so that the dense 3D output is more accurate for autonomous driving. Experimental results on the KITTI dataset showed that our method achieves real-time performance, and we further demonstrated the solution's robustness to sensor defects and challenging environmental conditions on the p-KITTI dataset.
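
For orientation, the sketch below illustrates the kind of spatial-propagation depth refinement the abstract refers to, in the spirit of the convolutional spatial propagation network (CSPN) of Cheng et al. It is a minimal PyTorch sketch under assumed names and shapes (cspn_refine, a 3x3 neighbourhood, per-step re-injection of valid LiDAR points); it is not the authors' kernel-connected implementation.

import torch
import torch.nn.functional as F

def cspn_refine(depth, affinity, sparse_depth=None, iterations=12):
    # depth:        (B, 1, H, W) initial dense depth prediction
    # affinity:     (B, 8, H, W) learned affinities for the 8 neighbours
    # sparse_depth: (B, 1, H, W) sparse LiDAR depth, 0 where no return
    # Normalise so the 8 neighbour weights and the self weight sum to 1.
    abs_sum = affinity.abs().sum(dim=1, keepdim=True).clamp(min=1e-6)
    neigh_w = affinity / abs_sum                       # (B, 8, H, W)
    self_w = 1.0 - neigh_w.sum(dim=1, keepdim=True)    # (B, 1, H, W)
    valid = (sparse_depth > 0).float() if sparse_depth is not None else None
    for _ in range(iterations):
        # Gather each pixel's 3x3 neighbourhood and drop the centre (index 4).
        padded = F.pad(depth, (1, 1, 1, 1), mode='replicate')
        patches = F.unfold(padded, 3).view(depth.shape[0], 9, *depth.shape[2:])
        neighbours = torch.cat([patches[:, :4], patches[:, 5:]], dim=1)
        # One propagation step: self term plus affinity-weighted neighbours.
        depth = self_w * depth + (neigh_w * neighbours).sum(dim=1, keepdim=True)
        if valid is not None:
            # Re-anchor the propagation on reliable LiDAR measurements.
            depth = valid * sparse_depth + (1.0 - valid) * depth
    return depth

In practice the affinity map would come from the fusion backbone, and the number of iterations trades refinement quality against the real-time budget the paper targets.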
Pages: 16