Illumination Insensitive Monocular Depth Estimation Based on Scene Object Attention and Depth Map Fusion

Cited: 0
Authors
Wen, Jing [1 ,2 ]
Ma, Haojiang [1 ,2 ]
Yang, Jie [1 ,2 ]
Zhang, Songsong [1 ,2 ]
Affiliations
[1] Shanxi Univ, Taiyuan, Peoples R China
[2] Minist Educ, Key Lab Comp Intelligence & Chinese Proc, Taiyuan, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X | 2024 / Vol. 14434
Keywords
Monocular depth estimation; Scene object attention; Weighted depth map fusion; Image enhancement; Illumination insensitivity;
DOI
10.1007/978-981-99-8549-4_30
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Monocular depth estimation (MDE) is a crucial but challenging computer vision (CV) task that suffers from lighting sensitivity, blurring of neighboring depth edges, and object omissions. To address these problems, we propose an illumination-insensitive monocular depth estimation method based on scene object attention and depth map fusion. Firstly, we design a low-light image selection algorithm, incorporated with the EnlightenGAN model, to improve the image quality of the training dataset and reduce the influence of lighting on depth estimation. Secondly, we develop a scene object attention mechanism (SOAM) to address the issue of incomplete depth information in natural scenes. Thirdly, we design a weighted depth map fusion (WDMF) module to fuse depth maps with various visual granularities and depth information, effectively resolving the problem of blurred depth map edges. Extensive experiments on the KITTI dataset demonstrate that our method effectively reduces the sensitivity of the depth estimation model to light and yields depth maps with more complete scene object contours.
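The abstract does not spell out how the WDMF module combines depth maps; a minimal sketch of one plausible weighted-fusion scheme, assuming per-pixel softmax weighting over multi-granularity depth predictions already resized to a common resolution (the function and argument names are hypothetical, not from the paper):

```python
import numpy as np

def weighted_depth_fusion(depth_maps, logits):
    """Fuse same-resolution depth maps with per-pixel softmax weights.

    depth_maps: list of HxW arrays, depth predictions at different granularities
    logits:     list of HxW arrays, unnormalized per-pixel confidences
    """
    d = np.stack(depth_maps)               # (N, H, W)
    w = np.stack(logits)                   # (N, H, W)
    w = np.exp(w - w.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)      # softmax over the N maps at each pixel
    return (w * d).sum(axis=0)             # (H, W) fused depth map

# Toy check: two maps with equal confidence reduce to a plain average.
a = np.full((2, 2), 1.0)
b = np.full((2, 2), 3.0)
fused = weighted_depth_fusion([a, b], [np.zeros((2, 2)), np.zeros((2, 2))])
# fused is 2.0 everywhere
```

In practice such weights would be predicted by the network rather than supplied externally, letting sharper maps dominate near object edges.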
Pages: 358-370
Page count: 13
Related Papers
50 items in total
  • [1] Monocular Depth Estimation Based on Multi-Scale Depth Map Fusion
    Yang, Xin
    Chang, Qingling
    Liu, Xinglin
    He, Siyuan
    Cui, Yan
    IEEE ACCESS, 2021, 9 : 67696 - 67705
  • [2] Radar Fusion Monocular Depth Estimation Based on Dual Attention
    Long, JianYu
    Huang, JinGui
    Wang, ShengChun
    ARTIFICIAL INTELLIGENCE AND SECURITY, ICAIS 2022, PT I, 2022, 13338 : 166 - 179
  • [3] Depth Map Decomposition for Monocular Depth Estimation
    Jun, Jinyoung
    Lee, Jae-Han
    Lee, Chul
    Kim, Chang-Su
    COMPUTER VISION - ECCV 2022, PT II, 2022, 13662 : 18 - 34
  • [4] Transformer-based monocular depth estimation with hybrid attention fusion and progressive regression
    Liu, Peng
    Zhang, Zonghua
    Meng, Zhaozong
    Gao, Nan
    NEUROCOMPUTING, 2025, 620
  • [5] Lightweight monocular absolute depth estimation based on attention mechanism
    Jin, Jiayu
    Tao, Bo
    Qian, Xinbo
    Hu, Jiaxin
    Li, Gongfa
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [6] Attention based multilayer feature fusion convolutional neural network for unsupervised monocular depth estimation
    Lei, Zeyu
    Wang, Yan
    Li, Zijian
    Yang, Junyao
    NEUROCOMPUTING, 2021, 423 : 343 - 352
  • [7] Monocular Depth Estimation Based on Dilated Convolutions and Feature Fusion
    Li, Hang
    Liu, Shuai
    Wang, Bin
    Wu, Yuanhao
    APPLIED SCIENCES-BASEL, 2024, 14 (13)
  • [8] Monocular Dense Reconstruction by Depth Estimation Fusion
    Chen, Tian
    Ding, Wendong
    Zhang, Dapeng
    Liu, Xilong
    PROCEEDINGS OF THE 30TH CHINESE CONTROL AND DECISION CONFERENCE (2018 CCDC), 2018, : 4460 - 4465
  • [9] Self-Supervised Monocular Depth Estimation Based on Channel Attention
    Tao, Bo
    Chen, Xinbo
    Tong, Xiliang
    Jiang, Du
    Chen, Baojia
    PHOTONICS, 2022, 9 (06)
  • [10] Smart lighting control system based on fusion of monocular depth estimation and multi-object detection
    Shen, Dongdong
    Ning, Chenguang
    Wang, Yingjie
    Duan, Wenjun
    Duan, Peiyong
    ENERGY AND BUILDINGS, 2022, 277