Unsupervised framework for depth estimation and camera motion prediction from video

Cited by: 13
Authors
Yang, Delong [1 ]
Zhong, Xunyu [1 ]
Gu, Dongbing [2 ]
Peng, Xiafu [1 ]
Hu, Huosheng [2 ]
Affiliations
[1] Xiamen Univ, Dept Automat, Xiamen 361005, Peoples R China
[2] Univ Essex, Sch Comp Sci & Elect Engn, Colchester CO4 3SQ, Essex, England
Keywords
Unsupervised deep learning; Depth estimation; Camera motion prediction; Convolutional neural network
DOI
10.1016/j.neucom.2019.12.049
CLC classification
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Depth estimation from monocular video plays a crucial role in scene perception. A significant drawback of supervised learning models is their need for vast amounts of manually labeled data (ground truth) for training. To overcome this limitation, unsupervised learning strategies that do not require ground truth have attracted considerable attention from researchers in recent years. This paper presents a novel unsupervised framework for jointly estimating single-view depth and predicting camera motion. Stereo image sequences are used to train the model, while only monocular images are required for inference. The presented framework is composed of two CNNs (a depth CNN and a pose CNN) that are trained concurrently and tested independently. The objective function is constructed from the epipolar geometry constraints between stereo image sequences. To improve the accuracy of the model, a left-right consistency loss is added to the objective function. The use of stereo image sequences enables the model to exploit both the spatial information between stereo images and the temporal photometric warp error across image sequences. Experimental results on the KITTI and Cityscapes datasets show that our model not only outperforms prior unsupervised approaches but also achieves results comparable with several supervised methods. Moreover, we also train our model on the EuRoC dataset, which was captured in an indoor environment. Experiments in indoor and outdoor scenes are conducted to test the generalization capability of the model. (C) 2019 Elsevier B.V. All rights reserved.
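The two loss terms the abstract describes, a photometric warp error and a left-right disparity consistency loss, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names are illustrative, the warp uses nearest-neighbour sampling for simplicity (the paper's differentiable warping would use bilinear sampling), and only rectified horizontal stereo shifts are modeled.

```python
import numpy as np

def warp_horizontal(img, disp):
    """Sample `img` at columns shifted left by the per-pixel disparity
    (nearest-neighbour; a trainable model would use bilinear sampling)."""
    h, w = img.shape
    cols = np.tile(np.arange(w), (h, 1))
    rows = np.tile(np.arange(h)[:, None], (1, w))
    src = np.clip(np.round(cols - disp).astype(int), 0, w - 1)
    return img[rows, src]

def photometric_loss(target, source, disp):
    """Mean absolute photometric error between the target view and the
    source view warped into the target view using the disparity map."""
    return np.abs(target - warp_horizontal(source, disp)).mean()

def lr_consistency_loss(disp_left, disp_right):
    """Left-right consistency: the left disparity map should agree with
    the right disparity map warped into the left view."""
    return np.abs(disp_left - warp_horizontal(disp_right, disp_left)).mean()
```

With a correct disparity map, warping the right image into the left view reproduces the left image and both losses vanish; training drives the depth CNN toward disparities that minimize these terms.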
Pages: 169-185
Number of pages: 17