Volumetric Propagation Network: Stereo-LiDAR Fusion for Long-Range Depth Estimation

Cited by: 38
Authors
Choe, Jaesung [1 ]
Joo, Kyungdon [2 ,3 ]
Imtiaz, Tooba [4 ]
Kweon, In So [4 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Div Future Vehicle, Daejeon 34141, South Korea
[2] UNIST, Artificial Intelligence Grad Sch, Ulsan 44919, South Korea
[3] UNIST, Dept Comp Sci, Ulsan 44919, South Korea
[4] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon 34141, South Korea
Keywords
Three-dimensional displays; Feature extraction; Cameras; Estimation; Laser radar; Robot sensing systems; Two dimensional displays; Autonomous driving; depth estimation; sensor fusion; stereo-LiDAR fusion
DOI: 10.1109/LRA.2021.3068712
Chinese Library Classification: TP24 [Robotics]
Discipline codes: 080202; 1405
Abstract
Stereo-LiDAR fusion is promising because it combines two complementary types of 3D perception: dense depth from stereo cameras and sparse but highly accurate point clouds from LiDAR. However, because the two sensors differ in modality and structure, aligning their data is the key to successful fusion. To this end, we propose a geometry-aware stereo-LiDAR fusion network for long-range depth estimation, called the volumetric propagation network. The key idea of our network is to exploit sparse and accurate point clouds as a cue for guiding correspondences of stereo images in a unified 3D volume space. Unlike existing fusion strategies, we directly embed point clouds into the volume, which enables us to propagate valid information into nearby voxels and to reduce the uncertainty of correspondences. This allows us to fuse the two input modalities seamlessly and regress a long-range depth map. Our fusion is further enhanced by a newly proposed feature extraction layer for point clouds guided by images: FusionConv. FusionConv extracts point cloud features that consider both semantic (2D image domain) and geometric (3D domain) relations and aid fusion in the volume. Our network achieves state-of-the-art performance on the KITTI and Virtual KITTI datasets among recent stereo-LiDAR fusion methods.
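The core idea of embedding sparse, accurate LiDAR measurements directly into a stereo cost volume can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (the paper's volume, propagation scheme, and FusionConv layer are learned); it is a simplified, hypothetical stand-in in which each LiDAR disparity lowers the matching cost in a Gaussian neighborhood along the disparity axis, so nearby voxels inherit the measurement's confidence:

```python
import numpy as np

def embed_lidar_in_cost_volume(cost, sparse_disp, sigma=1.0):
    """Embed sparse LiDAR disparities into a stereo cost volume.

    cost:        (D, H, W) matching-cost volume from stereo features
    sparse_disp: (H, W) LiDAR disparity map, 0 where no measurement
    Returns a volume whose cost is lowered (made more confident)
    around each measured disparity -- a toy analogue of the paper's
    volumetric embedding and propagation to nearby voxels.
    """
    D, H, W = cost.shape
    d = np.arange(D).reshape(D, 1, 1)              # candidate disparities
    valid = sparse_disp > 0                        # LiDAR hit mask
    # Gaussian confidence peaked at each measured disparity
    conf = np.exp(-0.5 * ((d - sparse_disp) / sigma) ** 2)
    conf = np.where(valid, conf, 0.0)              # zero where no LiDAR
    return cost - conf                             # lower cost = better match

# Tiny usage example: a flat (uninformative) cost volume plus two LiDAR hits.
cost = np.zeros((8, 2, 2))
sparse = np.array([[3.0, 0.0],
                   [0.0, 5.0]])
fused = embed_lidar_in_cost_volume(cost, sparse)
print(fused.argmin(axis=0))  # winner-take-all follows LiDAR where available
```

In the actual network the fused volume is processed by 3D convolutions and a soft-argmin regression rather than a hard argmin, but the sketch captures why embedding points into the volume reduces correspondence uncertainty: voxels near a LiDAR measurement become cheaper to match.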
Pages: 4672-4679 (8 pages)