Light-weight Monocular Depth Estimation Via Cross Attention Fusion of Sparse LiDAR

Cited by: 0
Authors
Rim, Hyun-Woo [1]
Kwak, Dae-Won [2]
Kim, Beom-Joon [2]
Kim, Jin-Yeob [2]
Kim, Dong-Han [1]
Affiliations
[1] Department of Electronics Engineering (AgeTech-Service Convergence Major), Kyung Hee University
[2] Department of Artificial Intelligence, Kyung Hee University
Keywords
camera LiDAR fusion; deep learning; monocular depth estimation; sparse LiDAR
DOI
10.5302/J.ICROS.2024.24.0116
Abstract
This article proposes a light-weight monocular depth estimation model applicable to mobile robots. Unlike autonomous vehicles, mobile robots face constraints on sensor and computing resources owing to the need for a power-efficient, lightweight design. Given these constraints, we propose a model that estimates depth images from small camera images with minimal parameters and computational overhead. Additionally, to address the performance degradation that occurs when the model is made lighter, we efficiently integrate sparse LiDAR point clouds through a cross-attention mechanism. This enables mobile robots to effectively acquire depth information about their surroundings. © ICROS 2024.
Pages: 828-833
Number of pages: 5
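As a rough illustration of the fusion idea described in the abstract above, the sketch below has a feature map from a small camera image attend to sparse LiDAR point features via multi-head cross-attention. This is only one plausible reading of the abstract, not the authors' implementation; the module name CrossAttentionFusion, all layer sizes, and the tensor shapes are assumptions.

```python
# Hypothetical sketch (not the paper's released code): image features (queries)
# attend to sparse LiDAR point features (keys/values) with cross-attention.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse sparse LiDAR point features into an image feature map."""

    def __init__(self, img_dim=64, lidar_dim=32, embed_dim=64, num_heads=4):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)      # pixel features -> shared embedding
        self.lidar_proj = nn.Linear(lidar_dim, embed_dim)  # LiDAR point features -> shared embedding
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, img_feat, lidar_feat):
        # img_feat:   (B, C, H, W) feature map from a lightweight CNN encoder (assumed)
        # lidar_feat: (B, N, lidar_dim) features of N sparse LiDAR points (assumed)
        b, c, h, w = img_feat.shape
        q = self.img_proj(img_feat.flatten(2).transpose(1, 2))  # (B, H*W, E) queries
        kv = self.lidar_proj(lidar_feat)                         # (B, N, E) keys/values
        fused, _ = self.attn(q, kv, kv)                          # each pixel attends to LiDAR points
        fused = self.norm(q + fused)                             # residual connection + layer norm
        return fused.transpose(1, 2).reshape(b, -1, h, w)        # back to (B, E, H, W)


if __name__ == "__main__":
    # Quick shape check with dummy inputs
    img = torch.randn(2, 64, 24, 32)    # small feature map from a small camera image
    lidar = torch.randn(2, 128, 32)     # 128 sparse LiDAR point features per sample
    out = CrossAttentionFusion()(img, lidar)
    print(out.shape)                    # torch.Size([2, 64, 24, 32])
```

In this reading, the fused feature map would feed a lightweight decoder that regresses the dense depth image; that decoder is omitted here.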