Learning Depth from Monocular Videos using Direct Methods

Cited by: 398
Authors
Wang, Chaoyang [1 ]
Miguel Buenaposada, Jose [1 ,2 ]
Zhu, Rui [1 ]
Lucey, Simon [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Univ Rey Juan Carlos, Mostoles, Spain
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
DOI
10.1109/CVPR.2018.00216
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The ability to predict depth from a single image, using recent advances in CNNs, is of increasing interest to the vision community. Unsupervised strategies for learning are particularly appealing, as they can utilize much larger and more varied monocular video datasets during training without the need for ground-truth depth or stereo. In previous works, separate pose and depth CNN predictors had to be determined such that their joint outputs minimized the photometric error. Inspired by recent advances in direct visual odometry (DVO), we argue that the depth CNN predictor can be learned without a pose CNN predictor. Further, we demonstrate empirically that incorporating a differentiable implementation of DVO, along with a novel depth normalization strategy, substantially improves performance over state-of-the-art methods that use monocular videos for training.
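The depth normalization mentioned in the abstract can be illustrated with a minimal sketch: dividing each predicted depth map by its spatial mean removes the global scale ambiguity, so the photometric loss cannot be trivially reduced by shrinking all predicted depths. The function name and NumPy implementation below are hypothetical, not the authors' code.

```python
import numpy as np

def normalize_depth(depth):
    # Hypothetical sketch of per-image depth normalization:
    # scale the depth map so its spatial mean is 1, removing
    # the global scale degree of freedom from the prediction.
    return depth / depth.mean()

pred = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
norm = normalize_depth(pred)
print(norm.mean())  # -> 1.0 by construction
```

Because the mean of the normalized map is always 1, the network cannot lower the training loss simply by predicting uniformly smaller depths.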
Pages: 2022 / 2030
Page count: 9