Two-View Monocular Depth Estimation by Optic-Flow-Weighted Fusion

Cited by: 2
Authors
Kaneko, Alex Masuo [1 ,2 ]
Yamamoto, Kenjiro [1 ,2 ]
Affiliations
[1] Hitachi Ltd, Ctr Technol Innovat Mech Engn, Robot Res Dept, Hitachinaka, Ibaraki 3120034, Japan
[2] Hitachi Ltd, Res & Dev Grp, Hitachinaka, Ibaraki 3120034, Japan
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2019, Vol. 4, No. 2
Keywords
Computer vision for automation; visual-based navigation; monocular depth estimation; flat surface model; low optic flow;
DOI
10.1109/LRA.2019.2893426
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
Depth estimation with monocular cameras is a cheap and promising solution for autonomous vehicles and robots. Although many approaches exist in the literature, estimating the depth of objects with low optic flow (low parallax) remains an open problem. This work proposes a new two-view monocular depth estimation method that uses only a monocular camera: depth is estimated from two optic-flow directions based on the Flat Surface Model, and the two estimates are fused using the optic flow as weights. The proposed method achieves an average depth estimation error of 3.68 m and a maximum error of 107.34 m, smaller than those obtained by traditional techniques (22.90 m and 9815.44 m, respectively).
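To make the fusion step concrete, the following minimal Python sketch shows one plausible reading of optic-flow-weighted fusion: two per-pixel depth maps, assumed here to come from the horizontal and vertical optic-flow components under the Flat Surface Model, are blended with the corresponding flow magnitudes as weights. The function fuse_depths and the inputs depth_u, depth_v, flow_u, flow_v are hypothetical illustrations, not the authors' implementation.

import numpy as np

def fuse_depths(depth_u, depth_v, flow_u, flow_v, eps=1e-6):
    """Optic-flow-weighted fusion of two per-pixel depth estimates.

    depth_u, depth_v : hypothetical depth maps assumed to be derived from the
                       horizontal and vertical optic-flow components under a
                       flat-surface model (the derivation is not reproduced here).
    flow_u, flow_v   : the corresponding optic-flow components; their absolute
                       values act as weights, so the estimate backed by the
                       larger (less noise-sensitive) flow dominates.
    """
    w_u, w_v = np.abs(flow_u), np.abs(flow_v)
    total = w_u + w_v + eps                  # guard pixels with near-zero flow
    return (w_u * depth_u + w_v * depth_v) / total

# Toy usage: two 2 x 2 depth maps and their flow components.
depth_u = np.array([[10.0, 12.0], [30.0, 55.0]])
depth_v = np.array([[11.0, 14.0], [28.0, 40.0]])
flow_u  = np.array([[ 5.0,  0.2], [ 4.0,  0.1]])
flow_v  = np.array([[ 1.0,  3.0], [ 1.0,  2.0]])
print(fuse_depths(depth_u, depth_v, flow_u, flow_v))

Weighting by flow magnitude simply favors whichever direction provides more parallax at each pixel, which mirrors the abstract's motivation of suppressing errors in low-optic-flow regions.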
Pages: 830 - 837
Number of pages: 8
Related Papers
35 in total
  • [21] Attention based multilayer feature fusion convolutional neural network for unsupervised monocular depth estimation
    Lei, Zeyu
    Wang, Yan
    Li, Zijian
    Yang, Junyao
    NEUROCOMPUTING, 2021, 423 : 343 - 352
  • [22] Integrating convolutional guidance and Transformer fusion with Markov Random Fields smoothing for monocular depth estimation
    Peng, Xiaorui
    Meng, Yu
    Shi, Boqiang
    Zheng, Chao
    Wang, Meijun
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 143
  • [23] Light-weight Monocular Depth Estimation Via Cross Attention Fusion of Sparse LiDAR
    Rim, Hyun-Woo
    Kwak, Dae-Won
    Kim, Beom-Joon
    Kim, Jin-Yeob
    Kim, Dong-Han
JOURNAL OF INSTITUTE OF CONTROL, ROBOTICS AND SYSTEMS, 2024, 30 (08) : 828 - 833
  • [24] Multilevel feature fusion and edge optimization network for self-supervised monocular depth estimation
    Liu, Guohua
    Niu, Shuqing
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (03)
  • [25] V2Depth: Monocular Depth Estimation via Feature-Level Virtual-View Simulation and Refinement
    Wu, Zizhang
    Li, Zhuozheng
    Fan, Zhi-Gang
    Wu, Yunzhe
    Pu, Jian
    Li, Xianzhi
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 688 - 697
  • [26] A Contour-Aware Monocular Depth Estimation Network Using Swin Transformer and Cascaded Multiscale Fusion
    Li, Tao
    Zhang, Yi
    IEEE SENSORS JOURNAL, 2024, 24 (08) : 13620 - 13628
  • [27] Self-supervised monocular depth and ego-motion estimation for CT-bronchoscopy fusion
    Chang, Qi
    Higgins, William E.
    IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING, MEDICAL IMAGING 2024, 2024, 12928
  • [28] CD-UDepth: Complementary dual-source information fusion for underwater monocular depth estimation
    Guo, Jiawei
    Ma, Jieming
    Sun, Feiyang
    Gao, Zhiqiang
Garcia-Fernandez, Angel F.
    Liang, Hai-Ning
    Zhu, Xiaohui
    Ding, Weiping
    INFORMATION FUSION, 2025, 118
  • [29] Smart lighting control system based on fusion of monocular depth estimation and multi-object detection
    Shen, Dongdong
    Ning, Chenguang
    Wang, Yingjie
    Duan, Wenjun
    Duan, Peiyong
    ENERGY AND BUILDINGS, 2022, 277
  • [30] IFDepth: Iterative fusion network for multi-frame self-supervised monocular depth estimation
    Wang, Lizhe
    Liang, Qi
    Che, Yu
    Wang, Lanmei
    Wang, Guibao
    KNOWLEDGE-BASED SYSTEMS, 2025, 318