Unsupervised Monocular Depth and Camera Pose Estimation with Multiple Masks and Geometric Consistency Constraints

Cited by: 1
Authors
Zhang, Xudong [1 ]
Zhao, Baigan [2 ]
Yao, Jiannan [2 ]
Wu, Guoqing [1 ]
Affiliations
[1] Nantong Univ, Sch Informat Sci & Technol, Nantong 226019, Peoples R China
[2] Nantong Univ, Sch Mech Engn, Nantong 226019, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
depth estimation; camera pose; visual odometry; unsupervised learning
DOI
10.3390/s23115329
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
This paper presents a novel unsupervised learning framework for estimating scene depth and camera pose from video sequences, which is fundamental to many high-level tasks such as 3D reconstruction, visual navigation, and augmented reality. Although existing unsupervised methods have achieved promising results, their performance degrades in challenging scenes containing dynamic objects and occluded regions. This work therefore adopts multiple mask techniques and geometric consistency constraints to mitigate these negative effects. First, multiple masks are used to identify outlier pixels in the scene, which are excluded from the loss computation. In addition, the identified outliers serve as a supervisory signal for training a mask estimation network; the estimated mask is then used to preprocess the input to the pose estimation network, mitigating the adverse effects of challenging scenes on pose estimation. Furthermore, we propose geometric consistency constraints that reduce sensitivity to illumination changes and act as additional supervisory signals for training the network. Experimental results on the KITTI dataset demonstrate that the proposed strategies effectively enhance the model's performance, outperforming other unsupervised methods.
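The two core ideas in the abstract — excluding masked outlier pixels from the photometric loss, and penalizing disagreement between a predicted depth map and the depth warped from an adjacent view — can be illustrated in a minimal NumPy sketch. This is a generic formulation of these widely used loss terms (the auto-mask heuristic here compares warped error against the unwarped source frame, a common outlier test), not the paper's exact implementation; all function names and the normalized depth-difference form are illustrative assumptions.

```python
import numpy as np

def masked_photometric_loss(target, synthesized, source):
    """Mean L1 photometric error over inlier pixels only.

    A pixel is treated as an outlier (e.g. dynamic object or occlusion)
    when the unwarped source frame already explains it better than the
    view-synthesized frame -- a common auto-mask heuristic, used here
    purely for illustration.
    """
    err_warp = np.abs(target - synthesized).mean(axis=-1)   # per-pixel warped error
    err_ident = np.abs(target - source).mean(axis=-1)       # per-pixel identity error
    mask = (err_warp < err_ident).astype(np.float64)        # 1 = inlier pixel
    denom = max(mask.sum(), 1.0)                            # avoid division by zero
    return (err_warp * mask).sum() / denom, mask

def geometric_consistency(depth_a, depth_b_warped):
    """Normalized depth difference between a predicted depth map and the
    depth warped from an adjacent frame.

    The mean serves as a consistency loss (insensitive to illumination,
    since it compares geometry, not pixel intensities); 1 - diff gives a
    soft per-pixel validity weight.
    """
    diff = np.abs(depth_a - depth_b_warped) / (depth_a + depth_b_warped)
    return diff.mean(), 1.0 - diff

# Tiny example: the synthesized view is closer to the target than the raw
# source, so every pixel passes the auto-mask test.
target = np.zeros((2, 2, 3))
synthesized = np.full((2, 2, 3), 0.1)
source = np.full((2, 2, 3), 0.2)
loss, mask = masked_photometric_loss(target, synthesized, source)
gc_loss, weight = geometric_consistency(np.ones((2, 2)), np.full((2, 2), 3.0))
```

In a full pipeline the synthesized frame would come from differentiably warping the source image with the predicted depth and pose; here the arrays stand in for those network outputs.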
Pages: 18