Lightweight monocular absolute depth estimation based on attention mechanism

Cited by: 1
Authors
Jin, Jiayu [1 ,2 ]
Tao, Bo [1 ]
Qian, Xinbo [2 ,3 ]
Hu, Jiaxin [3 ]
Li, Gongfa [4 ]
Affiliations
[1] Wuhan Univ Sci & Technol, Key Lab Met Equipment & Control Technol, Minist Educ, Wuhan, Peoples R China
[2] Wuhan Univ Sci & Technol, Hubei Key Lab Mech Transmiss & Mfg Engn, Wuhan, Peoples R China
[3] Wuhan Univ Sci & Technol, Precis Mfg Inst, Wuhan, Peoples R China
[4] Wuhan Univ Sci & Technol, Res Ctr Biomimet Robot & Intelligent Measurement &, Wuhan, Peoples R China
Keywords
lightweight network; deep learning; monocular depth estimation; channel attention; self-supervised;
DOI
10.1117/1.JEI.33.2.023010
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
To address the problem that higher accuracy is usually obtained at the cost of redundant models, we propose a lightweight network architecture that retains the high-precision advantage of the transformer and combines it effectively with a convolutional neural network. By greatly reducing the number of trainable parameters while maintaining high precision, the approach is well suited for deployment on edge devices. A detail highlight module (DHM) is added to fuse information from multiple scales, making the predicted depth more accurate and sharper. A dense geometric constraints module is introduced to recover an accurate scale factor in autonomous driving scenes without additional sensors. Experimental results demonstrate that, compared with Monodepth2, our model improves accuracy from 98.1% to 98.3% while reducing the number of parameters by about 80%.
Pages: 13
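The paper's implementation is not part of this record, so the following is only a rough sketch of the kind of channel-attention, multi-scale fusion the abstract describes: a coarse decoder feature is upsampled, concatenated with a finer skip feature, and reweighted per channel in squeeze-and-excitation style. The module names ChannelAttention and DetailFusion, the layer sizes, and the overall layout are assumptions for illustration, not the authors' DHM.

# Hypothetical sketch (PyTorch), not the authors' detail highlight module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Global average pooling, then per-channel weights in (0, 1).
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w[:, :, None, None]

class DetailFusion(nn.Module):
    """Upsample a coarse feature map, concatenate it with a finer one,
    and reweight the fused channels before the next decoder stage."""
    def __init__(self, coarse_ch: int, fine_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(coarse_ch + fine_ch, out_ch, kernel_size=3, padding=1)
        self.attn = ChannelAttention(out_ch)

    def forward(self, coarse, fine):
        coarse = F.interpolate(coarse, size=fine.shape[2:], mode="bilinear",
                               align_corners=False)
        fused = torch.relu(self.proj(torch.cat([coarse, fine], dim=1)))
        return self.attn(fused)

if __name__ == "__main__":
    # Toy feature maps standing in for decoder outputs at two scales.
    fuse = DetailFusion(coarse_ch=128, fine_ch=64, out_ch=64)
    coarse = torch.randn(1, 128, 24, 80)   # low-resolution decoder feature
    fine = torch.randn(1, 64, 48, 160)     # higher-resolution skip feature
    print(fuse(coarse, fine).shape)        # torch.Size([1, 64, 48, 160])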