A real-time semi-dense depth-guided depth completion network

Cited: 1
Authors
Xu, JieJie [1 ]
Zhu, Yisheng [1 ]
Wang, Wenqing [1 ]
Liu, Guangcan [2 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Automat, Nanjing 210044, Peoples R China
[2] Southeast Univ, Sch Automat, Nanjing 210018, Peoples R China
Source
VISUAL COMPUTER | 2024, Vol. 40, Issue 01
Keywords
Depth completion; Neural networks; Multi-modal fusion; SPARSE; RECONSTRUCTION; PROPAGATION;
DOI
10.1007/s00371-022-02767-w
Chinese Library Classification (CLC) code
TP31 [Computer Software];
Discipline classification codes
081202; 0835;
Abstract
Depth completion, the task of predicting dense depth maps from given sparse depth maps, is an important topic in computer vision. To cope with this task, both traditional image processing-based and data-driven deep learning-based algorithms have been established in the literature. In general, traditional algorithms, built upon non-learnable methods such as interpolation and custom kernels, handle flat regions well but may blunt sharp edges. Deep learning-based algorithms, despite their strengths in many aspects, still have several limitations; e.g., their performance depends heavily on the quality of the given sparse maps, and the dense maps they produce may contain artifacts and are often poor in terms of geometric consistency. To tackle these issues, in this work we propose a simple yet effective algorithm that aims to combine the strengths of both traditional image processing techniques and prevalent deep learning methods. Namely, given a sparse depth map, our algorithm first generates a semi-dense map and a 3D pose map using the adaptive densification module (ADM) and the coordinate projection module (CPM), respectively, and then feeds the obtained maps into a two-branch convolutional neural network to produce the final dense depth map. The proposed algorithm is evaluated on both the challenging outdoor KITTI dataset and the indoor NYUv2 dataset, and the experimental results show that our method performs better than several existing methods.
Pages: 87-97
Page count: 11
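The abstract outlines a two-stage pipeline: a traditional densification step (ADM) and a coordinate back-projection step (CPM) produce intermediate maps that are fused by a two-branch convolutional network. The following is a minimal, hypothetical PyTorch sketch of such a pipeline; the max-pooling densification, the pinhole back-projection, the toy TwoBranchNet, and the intrinsics values are illustrative assumptions, not the authors' actual modules.

```python
# A minimal sketch of the pipeline described in the abstract, assuming a
# standard pinhole camera model. All module internals here are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def adaptive_densification(sparse, kernel_size=7):
    """Hypothetical ADM stand-in: propagate nearby valid depths into holes
    with a max-pooling (grey dilation) pass, keeping original samples."""
    # sparse: (B, 1, H, W), zero at missing pixels
    filled = F.max_pool2d(sparse, kernel_size, stride=1, padding=kernel_size // 2)
    return torch.where(sparse > 0, sparse, filled)            # semi-dense map

def coordinate_projection(depth, fx, fy, cx, cy):
    """Hypothetical CPM stand-in: back-project each pixel to a 3-channel
    (X, Y, Z) map using pinhole intrinsics."""
    b, _, h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    u = u.to(depth).expand(b, 1, h, w)
    v = v.to(depth).expand(b, 1, h, w)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return torch.cat([x, y, depth], dim=1)                    # (B, 3, H, W)

class TwoBranchNet(nn.Module):
    """Toy two-branch fusion network: one branch per input modality;
    features are concatenated and decoded to a dense depth map."""
    def __init__(self, c=32):
        super().__init__()
        self.depth_branch = nn.Sequential(nn.Conv2d(1, c, 3, padding=1), nn.ReLU(),
                                          nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
        self.coord_branch = nn.Sequential(nn.Conv2d(3, c, 3, padding=1), nn.ReLU(),
                                          nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * c, 1, 3, padding=1)

    def forward(self, semi_dense, coords):
        return self.head(torch.cat([self.depth_branch(semi_dense),
                                     self.coord_branch(coords)], dim=1))

# Usage on a synthetic sparse map (the intrinsics values are assumed):
sparse = torch.zeros(1, 1, 64, 128)
sparse[:, :, ::8, ::8] = torch.rand(1, 1, 8, 16) * 80.0       # fake LiDAR samples
semi = adaptive_densification(sparse)
xyz = coordinate_projection(semi, fx=721.5, fy=721.5, cx=64.0, cy=32.0)
dense = TwoBranchNet()(semi, xyz)                              # (1, 1, 64, 128)
```

In this sketch the semi-dense map and the (X, Y, Z) coordinate map play the roles of the ADM and CPM outputs; a real implementation would replace the dilation and the toy network with the modules and training objective described in the paper.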