SfMLearner++: Learning Monocular Depth & Ego-Motion using Meaningful Geometric Constraints

Cited by: 14
Authors
Prasad, Vignesh [1 ]
Bhowmick, Brojeshwar [1 ]
Affiliations
[1] TCS Res & Innovat, Embedded Syst & Robot, Kolkata, India
Keywords
DOI
10.1109/WACV.2019.00226
CLC Classification
TM (Electrical Engineering); TN (Electronics & Communication Technology)
Subject Classification Codes
0808; 0809
Abstract
Most geometric approaches to monocular Visual Odometry (VO) provide robust pose estimates but only sparse or semi-dense depth estimates. Recently, deep methods have shown good performance in generating dense depth and VO from monocular images by optimizing the photometric consistency between images. Despite being intuitive, a naive photometric loss does not ensure proper pixel correspondences between two views, which is the key factor for accurate depth and relative pose estimation; it is well known that simply minimizing such an error is prone to failure. We propose a method that uses epipolar constraints to make the learning more geometrically sound. We use the Essential matrix, obtained with Nister's Five-Point Algorithm, to enforce meaningful geometric constraints on the loss rather than using it as a label for training. Our method, although simple, is more geometrically meaningful and uses fewer parameters, yet performs comparably to state-of-the-art methods that rely on complex losses and large networks, showing the effectiveness of epipolar constraints. Such a geometrically constrained learning method succeeds even in cases where simply minimizing the photometric error would fail.
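A minimal sketch of the kind of epipolar term the abstract describes, assuming OpenCV's five-point solver for the Essential matrix and a Sampson-distance penalty on correspondences in normalized camera coordinates; the helper names essential_from_five_point and sampson_epipolar_loss are illustrative, and the paper's exact loss formulation and weighting may differ:

    import numpy as np
    import cv2

    def essential_from_five_point(pts1, pts2, K):
        # Estimate the Essential matrix between two views using the five-point
        # algorithm inside a RANSAC loop (OpenCV's findEssentialMat).
        E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
        return E, inlier_mask

    def sampson_epipolar_loss(E, x1, x2):
        # Mean Sampson distance of correspondences x1 <-> x2 (N x 2 arrays in
        # normalized camera coordinates). The term vanishes when x2^T E x1 = 0,
        # i.e. when the correspondences satisfy the epipolar constraint.
        ones = np.ones((x1.shape[0], 1))
        x1h = np.hstack([x1, ones])            # homogeneous points, view 1
        x2h = np.hstack([x2, ones])            # homogeneous points, view 2
        Ex1 = x1h @ E.T                        # epipolar lines in view 2
        Etx2 = x2h @ E                         # epipolar lines in view 1
        num = np.sum(x2h * Ex1, axis=1) ** 2   # (x2^T E x1)^2
        den = Ex1[:, 0]**2 + Ex1[:, 1]**2 + Etx2[:, 0]**2 + Etx2[:, 1]**2
        return np.mean(num / (den + 1e-12))

In a training loop, x2 would come from reprojecting each pixel with the predicted depth and relative pose, so the penalty would push the networks toward geometrically consistent correspondences rather than photometric similarity alone.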
Pages: 2087-2096
Page count: 10
Related Papers
50 in total
  • [1] Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
    Mahjourian, Reza
    Wicke, Martin
    Angelova, Anelia
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 5667 - 5675
  • [2] Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks
    Wang, Guangming
    Wang, Hesheng
    Liu, Yiling
    Chen, Weidong
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 4724 - 4730
  • [3] Unsupervised Learning of Monocular Depth and Ego-Motion using Conditional PatchGANs
    Vankadari, Madhu
    Kumar, Swagat
    Majumder, Anima
    Das, Kaushik
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5677 - 5684
  • [4] Unsupervised Learning of Depth and Ego-Motion with Spatial-Temporal Geometric Constraints
    Wang, Anjie
    Gao, Yongbin
    Fang, Zhijun
    Jiang, Xiaoyan
    Wang, Shanshe
    Ma, Siwei
    Hwang, Jenq-Neng
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 1798 - 1803
  • [5] Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints
    Zhao, Baigan
    Huang, Yingping
    Ci, Wenyan
    Hu, Xing
    SENSORS, 2022, 22 (04)
  • [6] Unsupervised monocular depth and ego-motion learning with structure and semantics
    Casser, Vincent
    Pirk, Soeren
    Mahjourian, Reza
    Angelova, Anelia
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 381 - 388
  • [7] Unsupervised Learning of Depth and Ego-Motion from Continuous Monocular Images
    Wang, Zhuo
    Huang, Min
    Huang, Xiao-Long
    Ma, Fei
    Dou, Jia-Ming
    Lyu, Jian-Li
    Journal of Computers (Taiwan), 2021, 32 (06) : 38 - 51
  • [8] Monocular Depth and Ego-motion Estimation with Scale Based on Superpixel and Normal Constraints
    Lu, Junxin
    Gao, Yongbin
    Chen, Jieyu
    Hwang, Jeng-Neng
    Fuji, Hamido
    Fang, Zhijun
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (10)
  • [9] Unsupervised Learning of Monocular Depth and Ego-Motion in Outdoor/Indoor Environments
    Gao, Ruipeng
    Xiao, Xuan
    Xing, Weiwei
    Li, Chi
    Liu, Lei
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (17) : 16247 - 16258
  • [10] Depth Estimation with Ego-Motion Assisted Monocular Camera
    Mansour M.
    Davidson P.
    Stepanov O.
    Raunio J.-P.
    Aref M.M.
    Piché R.
    Gyroscopy Navig., (3): 111 - 123