Attention U-Net Oriented Towards 3D Depth Estimation

Citations: 0
Authors
Ocsa Sanchez, Leonel Jaime [1 ]
Gutierrez Caceres, Juan Carlos [1 ]
Affiliations
[1] Univ Catolica San Pablo, Arequipa, Peru
Source
INTELLIGENT COMPUTING, VOL 3, 2024, Vol. 1018
Keywords
3D reconstruction; Depth estimation; Indoor environments; Outdoor environments; Convolutional Neural Network (CNN); Loss functions; Attention mechanism; U-Net; Structure from Motion
DOI
10.1007/978-3-031-62269-4_32
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Advancements in 3D reconstruction, applied to depth estimation in both indoor and outdoor environments, have achieved significant performance in various applications. Outdoor reconstruction has typically relied on traditional approaches such as Structure from Motion (SfM) and its variants, while indoor reconstruction has shifted towards depth-sensing devices. These devices, however, exhibit limitations under certain environmental factors, such as lighting conditions. Recent work has produced methods based on Convolutional Neural Networks (CNNs) that operate irrespective of whether the environment is enclosed or open and that can complement both traditional approaches. Building on these advances, alternatives incorporating attention layers have emerged and have evolved substantially in recent years. This paper therefore proposes a method for 3D depth estimation focused on indoor and outdoor images. The goal is to generate highly detailed and precise depth maps of real-world scenes using a modified U-Net with a custom attention mechanism.
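The abstract describes a modified U-Net whose skip connections are modulated by an attention mechanism before a dense depth map is regressed from a single image. The paper's custom attention design is not detailed in this record, so the sketch below is a minimal, hypothetical PyTorch version that stands in the widely known additive attention gate (as in Oktay et al.'s Attention U-Net) for the custom mechanism; the two-level encoder, channel widths, and sigmoid-normalized depth head are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: attention-gated U-Net regressing a one-channel depth map.
# The attention gate and all layer sizes are assumptions, not the paper's design.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BatchNorm and ReLU, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class AttentionGate(nn.Module):
    # Additive attention: the decoder signal g gates the encoder skip feature x.
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(g_ch, inter_ch, 1)
        self.w_x = nn.Conv2d(x_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, g, x):
        a = self.psi(torch.relu(self.w_g(g) + self.w_x(x)))  # attention map in (0, 1)
        return x * a  # suppress skip activations irrelevant to depth

class AttentionUNetDepth(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.up2, self.att2, self.dec2 = (nn.ConvTranspose2d(256, 128, 2, stride=2),
                                          AttentionGate(128, 128, 64), conv_block(256, 128))
        self.up1, self.att1, self.dec1 = (nn.ConvTranspose2d(128, 64, 2, stride=2),
                                          AttentionGate(64, 64, 32), conv_block(128, 64))
        self.head = nn.Conv2d(64, 1, 1)  # one-channel depth head

    def forward(self, x):
        e1 = self.enc1(x)                 # encoder features at full resolution
        e2 = self.enc2(self.pool(e1))     # encoder features at 1/2 resolution
        b = self.bottleneck(self.pool(e2))
        d2 = self.up2(b)
        d2 = self.dec2(torch.cat([self.att2(d2, e2), d2], dim=1))  # gated skip
        d1 = self.up1(d2)
        d1 = self.dec1(torch.cat([self.att1(d1, e1), d1], dim=1))  # gated skip
        return torch.sigmoid(self.head(d1))  # depth normalized to (0, 1)

depth = AttentionUNetDepth()(torch.randn(1, 3, 128, 128))
print(depth.shape)  # torch.Size([1, 1, 128, 128])
```

The design choice the gate encodes: each encoder skip feature is weighted by a decoder-conditioned map in (0, 1) before concatenation, letting the decoder emphasize depth-relevant regions instead of copying all skip activations through unchanged.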
Pages: 466-483
Number of pages: 18
Related Papers
50 records in total
  • [21] Attention-augmented U-Net (AA-U-Net) for semantic segmentation
    Rajamani, Kumar T.
    Rani, Priya
    Siebert, Hanna
    ElagiriRamalingam, Rajkumar
    Heinrich, Mattias P.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (04) : 981 - 989
  • [22] Multiscale Attention U-Net for Skin Lesion Segmentation
    Alahmadi, Mohammad D.
    IEEE ACCESS, 2022, 10 : 59145 - 59154
  • [23] U-Net with Attention Mechanism for Retinal Vessel Segmentation
    Si, Ze
    Fu, Dongmei
    Li, Jiahao
    IMAGE AND GRAPHICS, ICIG 2019, PT II, 2019, 11902 : 668 - 677
  • [24] DEU-Net: Dual Encoder U-Net for 3D Medical Image Segmentation
    Zhou, Yuxiang
    Kang, Xin
    Ren, Fuji
    Nakagawa, Satoshi
    Shan, Xiao
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 2735 - 2741
  • [25] BTIS-Net: Efficient 3D U-Net for Brain Tumor Image Segmentation
    Liu, Li
    Xia, Kaijian
    IEEE ACCESS, 2024, 12 : 133392 - 133405
  • [26] Monocular Depth Estimation of 2D Images Based on Optimized U-net with Transfer Learning
    Yeh, Ming-Tsung
    Chen, Tsung-Chi
    Pai, Neng-Sheng
    Cheng, Chi-Huan
    SENSORS AND MATERIALS, 2024, 36 (06) : 2569 - 2583
  • [27] Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans
    Boers, T. G. W.
    Hu, Y.
    Gibson, E.
    Barratt, D. C.
    Bonmati, E.
    Krdzalic, J.
    van der Heijden, F.
    Hermans, J. J.
    Huisman, H. J.
    PHYSICS IN MEDICINE AND BIOLOGY, 2020, 65 (06)
  • [28] A residual U-Net network with image prior for 3D image denoising
    Abascal, J. F. P. J.
    Bussod, S.
    Ducros, N.
    Si-Mohamed, S.
    Douek, P.
    Chappard, C.
    Peyrin, F.
    28TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2020), 2021, : 1264 - 1268
  • [29] Multi-level Glioma Segmentation using 3D U-Net Combined Attention Mechanism with Atrous Convolution
    Cheng, Jianhong
    Liu, Jin
    Liu, Liangliang
    Pan, Yi
    Wang, Jianxin
    2019 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2019, : 1031 - 1036
  • [30] U-net Segmentation of Lung Cancer CT Scans for 3D Rendering
    Ismail, Hanin Monir
    McKee, Gerard T.
    2024 5TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE, ROBOTICS AND CONTROL, AIRC 2024, 2024, : 35 - 40