Single image depth estimation using improved U-Net and edge-guide loss

Cited by: 0
Authors:
He M. [1 ,2 ,3 ]
Gao Y. [1 ,2 ,3 ]
Long Y. [1 ,2 ,3 ]
Affiliations:
[1] College of Mechanical and Electronic Engineering, Northwest A & F University, Shaanxi, Yangling
[2] Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Shaanxi, Yangling
[3] Shaanxi Key Laboratory of Agricultural Information Perception and Intelligent Service, Shaanxi, Yangling
Keywords:
Deep learning; Depth estimation; Edge-guide loss; Hybrid dilated convolution
DOI:
10.1007/s11042-024-19235-3
Abstract:
Monocular depth estimation is regarded as a critical step in context-aware scene comprehension; it typically takes an image from a single viewpoint as input and directly predicts a depth value for each pixel. However, predicting accurate object borders without replicating texture is difficult, which leads to missing tiny objects and blurry object edges in predicted depth images. In this paper, we propose a monocular depth estimation method based on an improved U-Net encoder-decoder network. We introduce a new training loss term, called edge-guide loss, which pushes the network to focus on object edges and thereby improves the depth accuracy of tiny objects and edges. The encoder is built on DenseNet-169, and the decoder uses 2× bilinear up-sampling, skip-connections, and hybrid dilated convolution; the skip-connections pass multi-scale feature maps from the encoder to the decoder. The full loss function combines the new edge-guide loss with three basic loss terms. We evaluate our algorithm on the NYU Depth V2 dataset. The experimental results show that the proposed network can produce depth images from a single RGB image with unambiguous borders and better depth for tiny objects. Meanwhile, compared with state-of-the-art approaches, our network performs better in both visual quality and objective measurement. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
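The abstract does not reproduce the paper's exact edge-guide formulation. As a minimal sketch of how an edge-focused loss term of this kind can be realized, the snippet below compares the spatial gradients of the predicted and ground-truth depth maps and up-weights the penalty where the ground truth has strong edges. The finite-difference gradient operator and the (1 + edge magnitude) weighting are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def image_gradients(depth):
    """Simple forward finite-difference gradients along x and y."""
    gx = np.zeros_like(depth)
    gy = np.zeros_like(depth)
    gx[:, :-1] = depth[:, 1:] - depth[:, :-1]   # horizontal differences
    gy[:-1, :] = depth[1:, :] - depth[:-1, :]   # vertical differences
    return gx, gy

def edge_guide_loss(pred, gt):
    """Hypothetical edge-guide term: L1 distance between depth gradients,
    weighted by the ground-truth edge magnitude so object borders dominate."""
    pgx, pgy = image_gradients(pred)
    ggx, ggy = image_gradients(gt)
    edge_weight = np.abs(ggx) + np.abs(ggy)            # large at gt edges
    grad_diff = np.abs(pgx - ggx) + np.abs(pgy - ggy)  # gradient mismatch
    return np.mean((1.0 + edge_weight) * grad_diff)
```

In practice a term like this would be added to the pixel-wise depth loss with a tunable weight; a perfect prediction yields zero, and errors concentrated at object boundaries are penalized more heavily than errors in flat regions.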
Pages: 84619 - 84637 (18 pages)
Related papers (50 total):
  • [1] Optimizing depth estimation with attention U-Net
    Farooq, Huma
    Chachoo, Manzoor Ahmad
    Bhat, Sajid Yousuf
    INTERNATIONAL JOURNAL OF SYSTEM ASSURANCE ENGINEERING AND MANAGEMENT, 2024,
  • [2] An Improved U-Net Architecture for Image Dehazing
    Ge, Wenyi
    Lin, Yi
    Wang, Zhitao
    Wang, Guigui
    Tan, Shihan
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2021, E104D (12) : 2218 - 2225
  • [3] Image denoising method based on deep learning using improved U-net
    Han J.
    Choi J.
    Lee C.
    IEIE Transactions on Smart Processing and Computing, 2021, 10 (04) : 291 - 295
  • [4] Terahertz image super-resolution using an improved Attention U-net
    Li, Le
    Zou, Yan
    Wang, Bowen
    Zhang, Linfei
    Zhang, Yuzhen
    COMPUTATIONAL IMAGING VI, 2021, 11731
  • [5] Implementation of a Modified U-Net for Medical Image Segmentation on Edge Devices
    Ali, Owais
    Ali, Hazrat
    Shah, Syed Ayaz Ali
    Shahzad, Aamir
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (11) : 4593 - 4597
  • [6] Edge U-Net: Brain tumor segmentation using MRI based on deep U-Net model with boundary information
    Allah, Ahmed M. Gab
    Sarhan, Amany M.
    Elshennawy, Nada M.
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 213
  • [7] Edge detection based-on U-net using edge classification CNN
    Choi K.-H.
    Ha J.-E.
    Journal of Institute of Control, Robotics and Systems, 2019, 25 (08) : 684 - 689
  • [8] An improved U-net based retinal vessel image segmentation method
    Ren, Kan
    Chang, Longdan
    Wan, Minjie
    Gu, Guohua
    Chen, Qian
    HELIYON, 2022, 8 (10)
  • [9] An Improved U-Net Convolutional Networks for Seabed Mineral Image Segmentation
    Song, Wei
    Zheng, Nan
    Liu, Xiangchun
    Qiu, Lirong
    Zheng, Rui
    IEEE ACCESS, 2019, 7 : 82744 - 82752
  • [10] Image Semantic Segmentation for Autonomous Driving Based on Improved U-Net
    Sun, Chuanlong
    Zhao, Hong
    Mu, Liang
    Xu, Fuliang
    Lu, Laiwei
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (01): : 787 - 801