Deep Sparse Depth Completion Using Joint Depth and Normal Estimation

Times Cited: 0
Authors
Li, Ying [1 ]
Jung, Cheolkon [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian, Shaanxi, Peoples R China
Source
2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS | 2023
Funding
National Natural Science Foundation of China;
Keywords
Depth completion; adversarial learning; discriminator; generator; LiDAR; surface normal; OBJECT DETECTION;
DOI
10.1109/ISCAS46773.2023.10181618
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Depth completion densifies sparse depth images obtained from LiDAR and is highly challenging due to the extreme sparsity of the input. In this paper, we propose deep sparse depth completion using joint depth and normal estimation. Depth and surface normal are mutually convertible through their geometric relationship in 3D coordinate space. Based on this relationship, we build a novel adversarial model that consists of one generator and two discriminators. The generator adopts an encoder-decoder structure: the encoder extracts features from the RGB image, the sparse depth image, and its binary mask, capturing the inherent geometric relationship between depth and surface normal, while two decoders with the same structure generate dense depth and surface normals based on that relationship. We utilize two discriminators to generate guide information for sparse depth completion from the input RGB image while imposing an auxiliary geometric constraint for depth refinement. Experimental results on the KITTI dataset show that the proposed method generates dense depth images with accurate object boundaries and outperforms state-of-the-art methods in terms of both visual quality and quantitative measurements.
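The "mutually convertible geometric relationship" the abstract refers to is the standard one: a depth map plus camera intrinsics determines 3D surface points, whose local tangents yield surface normals. The sketch below (not the authors' code; intrinsic values are illustrative) shows the depth-to-normal direction of that conversion.

```python
# A minimal sketch (not the authors' implementation) of the
# depth-to-normal geometric relationship exploited by the paper:
# back-project a dense depth map to 3D camera coordinates, then
# take each pixel's normal as the cross product of local tangents.
import numpy as np

def depth_to_normals(depth, fx, fy, cx, cy):
    """Convert an HxW depth map (meters) to an HxWx3 map of unit normals."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project pixels to 3D points using the pinhole camera model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)      # H x W x 3
    # Tangent vectors along the image axes.
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)
    normals = np.cross(du, dv)                     # H x W x 3
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)

# Toy usage: a fronto-parallel plane at 2 m yields normals along the z-axis.
n = depth_to_normals(np.full((8, 8), 2.0), fx=500.0, fy=500.0, cx=4.0, cy=4.0)
```

In the paper this constraint runs the other way as well: predicted normals regularize the completed depth, which is what the auxiliary geometric discriminator enforces.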
Pages: 5