Attention U-Net Oriented Towards 3D Depth Estimation

Cited: 0
Authors
Ocsa Sanchez, Leonel Jaime [1 ]
Gutierrez Caceres, Juan Carlos [1 ]
Affiliations
[1] Univ Catolica San Pablo, Arequipa, Peru
Source
INTELLIGENT COMPUTING, VOL 3, 2024 | 2024 / Volume 1018
Keywords
3D reconstruction; Depth estimation; Indoor environments; Outdoor environments; Convolutional Neural Network (CNN); Loss functions; Attention mechanism; U-Net; STRUCTURE-FROM-MOTION;
DOI
10.1007/978-3-031-62269-4_32
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Advances in 3D reconstruction applied to depth estimation have achieved strong performance across a variety of indoor and outdoor applications. Outdoor reconstruction has typically relied on traditional approaches such as Structure from Motion (SfM) and its variants, whereas indoor reconstruction has shifted towards depth-sensing devices. These devices, however, are limited by environmental factors such as lighting conditions. More recent methods based on Convolutional Neural Networks (CNNs) operate in both enclosed and open environments and can complement either approach. Building on these advances, alternatives that integrate attention layers have emerged and matured considerably in recent years. This paper therefore proposes a method for 3D depth estimation focused on indoor and outdoor images, with the goal of generating highly detailed and precise depth maps of real-world scenes using a modified U-Net with a custom attention mechanism.
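The abstract names the building blocks (a U-Net backbone with a custom attention mechanism producing dense depth maps) without detailing them, so a minimal sketch of how such a model is commonly assembled follows. This is not the paper's exact architecture: the additive attention gates, three-level encoder, channel widths, and ReLU depth head below are assumptions in the style of the standard Attention U-Net, shown only to illustrate the general pattern.

```python
# Minimal PyTorch sketch of an attention-gated U-Net for monocular depth
# estimation. All design choices here (additive attention gates, 3-level
# encoder, channel widths, ReLU depth head) are assumptions, not the
# paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 conv + BatchNorm + ReLU layers, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class AttentionGate(nn.Module):
    """Additive attention gate: the decoder signal g re-weights the skip features x."""

    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, g, x):
        attn = torch.sigmoid(self.psi(F.relu(self.wg(g) + self.wx(x))))
        return x * attn  # attention-weighted skip connection


class AttentionUNetDepth(nn.Module):
    """Encoder-decoder with attention-gated skips; outputs a 1-channel depth map."""

    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
        self.att3 = AttentionGate(base * 4, base * 4, base * 2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.att2 = AttentionGate(base * 2, base * 2, base)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.att1 = AttentionGate(base, base, base // 2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # per-pixel depth prediction

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        u3 = self.up3(b)
        d3 = self.dec3(torch.cat([self.att3(u3, e3), u3], dim=1))
        u2 = self.up2(d3)
        d2 = self.dec2(torch.cat([self.att2(u2, e2), u2], dim=1))
        u1 = self.up1(d2)
        d1 = self.dec1(torch.cat([self.att1(u1, e1), u1], dim=1))
        return F.relu(self.head(d1))  # non-negative dense depth map


if __name__ == "__main__":
    # Toy forward pass: one RGB image in, one depth map of the same resolution out.
    model = AttentionUNetDepth()
    rgb = torch.randn(1, 3, 192, 256)  # height/width must be divisible by 8
    depth = model(rgb)
    print(depth.shape)  # torch.Size([1, 1, 192, 256])
```

A depth-regression loss (for example, a scale-invariant log or BerHu loss) would be applied to this output during training; the specific loss functions named in the paper's keywords are not reproduced here.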
Pages: 466-483
Number of pages: 18