Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints

Cited by: 558
Authors
Mahjourian, Reza [1 ,2 ]
Wicke, Martin [2 ]
Angelova, Anelia [2 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] Google Brain, Mountain View, CA 94043 USA
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
DOI
10.1109/CVPR.2018.00594
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the whole scene, and to enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task, which we solve with a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this 3D-based loss with 2D losses based on the photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured with an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets and outperforms the state of the art for both depth and ego-motion. Because we require only simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low-quality, uncalibrated video dataset and evaluating on KITTI, where we rank among the top-performing prior methods that were trained on KITTI itself.
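The abstract's core idea, that depth and ego-motion estimates should produce consistent 3D point clouds across consecutive frames, can be sketched as follows. This is an illustrative numpy sketch, not the paper's implementation: the function names are hypothetical, and the simple pixel-wise residual below stands in for the paper's ICP-based alignment loss with approximate backpropagated gradients.

```python
import numpy as np

def backproject(depth, K_inv):
    """Back-project a depth map into a 3D point cloud in the camera frame.

    depth: (H, W) array of per-pixel depths.
    K_inv: (3, 3) inverse camera intrinsics.
    Returns an (H*W, 3) array of 3D points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ K_inv.T               # per-pixel viewing rays
    return rays * depth.reshape(-1, 1)  # scale each ray by its depth

def alignment_residual(depth_t, depth_t1, K, T):
    """Mean 3D distance between the frame-t cloud, moved by the estimated
    ego-motion T (4x4 rigid transform), and the frame-(t+1) cloud.

    Simplified stand-in for the paper's 3D loss: the real method matches
    nearest neighbors via ICP rather than comparing points pixel-wise.
    """
    K_inv = np.linalg.inv(K)
    cloud_t = backproject(depth_t, K_inv)
    cloud_t1 = backproject(depth_t1, K_inv)
    # Rotate and translate frame-t points into the frame-(t+1) camera.
    moved = cloud_t @ T[:3, :3].T + T[:3, 3]
    return np.mean(np.linalg.norm(moved - cloud_t1, axis=1))
```

If the predicted depth maps and ego-motion are mutually consistent, the residual is zero; training on such a loss pushes the network toward geometrically coherent predictions without any ground-truth supervision.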
Pages: 5667-5675
Number of pages: 9