Illumination Insensitive Monocular Depth Estimation Based on Scene Object Attention and Depth Map Fusion

Cited: 0
Authors
Wen, Jing [1 ,2 ]
Ma, Haojiang [1 ,2 ]
Yang, Jie [1 ,2 ]
Zhang, Songsong [1 ,2 ]
Affiliations
[1] Shanxi Univ, Taiyuan, Peoples R China
[2] Minist Educ, Key Lab Comp Intelligence & Chinese Proc, Taiyuan, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X | 2024 / Vol. 14434
Keywords
Monocular depth estimation; Scene object attention; Weighted depth map fusion; Image enhancement; Illumination insensitivity
DOI
10.1007/978-981-99-8549-4_30
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Monocular depth estimation (MDE) is a crucial but challenging computer vision (CV) task that suffers from sensitivity to lighting, blurring of neighboring depth edges, and omission of scene objects. To address these problems, we propose an illumination-insensitive monocular depth estimation method based on scene object attention and depth map fusion. First, we design a low-light image selection algorithm, combined with the EnlightenGAN model, to improve the image quality of the training dataset and reduce the influence of lighting on depth estimation. Second, we develop a scene object attention mechanism (SOAM) to address the issue of incomplete depth information in natural scenes. Third, we design a weighted depth map fusion (WDMF) module to fuse depth maps with varying visual granularity and depth information, effectively resolving the problem of blurred depth map edges. Extensive experiments on the KITTI dataset demonstrate that our method effectively reduces the sensitivity of the depth estimation model to lighting and yields depth maps with more complete scene object contours.
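To make the fusion idea in the abstract concrete, the snippet below is a minimal PyTorch sketch of weighted multi-scale depth map fusion in the spirit of the described WDMF module: several decoder-scale depth maps are upsampled to a common resolution and blended with per-pixel softmax weights. The class name, the 1x1 weighting head, and the bilinear upsampling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedDepthFusion(nn.Module):
    """Fuses K multi-scale depth maps into one full-resolution map (sketch)."""

    def __init__(self, num_scales: int = 4):
        super().__init__()
        # A 1x1 conv predicts one fusion weight per scale at every pixel
        # (hypothetical design choice for this sketch).
        self.weight_head = nn.Conv2d(num_scales, num_scales, kernel_size=1)

    def forward(self, depth_maps):
        # depth_maps: list of K tensors, each (B, 1, H_k, W_k),
        # ordered from finest to coarsest resolution.
        target_size = depth_maps[0].shape[-2:]
        upsampled = [
            F.interpolate(d, size=target_size, mode="bilinear",
                          align_corners=False)
            for d in depth_maps
        ]
        stacked = torch.cat(upsampled, dim=1)                 # (B, K, H, W)
        weights = F.softmax(self.weight_head(stacked), dim=1) # per-pixel weights
        fused = (weights * stacked).sum(dim=1, keepdim=True)  # (B, 1, H, W)
        return fused


if __name__ == "__main__":
    # Example: four depth maps at progressively coarser resolutions.
    maps = [torch.rand(2, 1, 192 // s, 640 // s) for s in (1, 2, 4, 8)]
    print(WeightedDepthFusion(num_scales=4)(maps).shape)  # torch.Size([2, 1, 192, 640])
```

The per-pixel softmax keeps the fused output within the range spanned by the input depth maps, which is one plausible way to sharpen edges that a naive average would blur; the paper's actual weighting scheme may differ.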
Pages: 358-370
Number of pages: 13