Masked GAN for Unsupervised Depth and Pose Prediction With Scale Consistency

Cited by: 44
Authors
Zhao, Chaoqiang [1 ]
Yen, Gary G. [2 ]
Sun, Qiyu [1 ]
Zhang, Chongzhen [1 ]
Tang, Yang [1 ]
Affiliations
[1] East China Univ Sci & Technol, Key Lab Adv Control & Optimizat Chem Proc, Minist Educ, Shanghai 200237, Peoples R China
[2] Oklahoma State Univ, Sch Elect & Comp Engn, Stillwater, OK 74075 USA
Funding
National Natural Science Foundation of China
Keywords
Estimation; Image reconstruction; Training; Visualization; Generative adversarial networks; Videos; Generators; Adversarial learning; depth estimation; generative adversarial network (GAN); scale consistency; unsupervised learning; visual odometry (VO)
DOI
10.1109/TNNLS.2020.3044181
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Previous work has shown that adversarial learning can be used for unsupervised monocular depth and visual odometry (VO) estimation, in which the adversarial loss and the geometric image reconstruction loss serve as the main supervisory signals for training the whole unsupervised framework. However, the performance of the adversarial framework and of the image reconstruction is usually limited by occlusions and by visual-field changes between frames. This article proposes a masked generative adversarial network (GAN) for unsupervised monocular depth and ego-motion estimation. A MaskNet and a Boolean mask scheme are designed in this framework to eliminate the effects of occlusions and visual-field changes on the reconstruction loss and the adversarial loss, respectively. Furthermore, we enforce the scale consistency of our pose network through a new scale-consistency loss, so that the pose network can provide the full camera trajectory over a long monocular sequence. Extensive experiments show that each component proposed in this article contributes to the performance, and both our depth and trajectory predictions achieve competitive results on the KITTI and Make3D data sets.
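The masking idea described in the abstract can be illustrated with a minimal NumPy sketch. This is a simplified reading, not the paper's implementation: the function names are hypothetical, the validity mask is assumed to be a per-pixel 0/1 array (in the paper it comes from the learned MaskNet and the Boolean mask scheme), and the scale-consistency term follows the common normalized-depth-difference form; the full framework additionally uses an adversarial loss.

```python
import numpy as np

def masked_reconstruction_loss(target, warped, mask):
    """L1 photometric loss between the target frame and the view
    synthesized (warped) from a source frame via predicted depth and pose.
    Pixels with mask == 0 (occluded or moved out of the visual field)
    contribute nothing, so they cannot corrupt the supervisory signal."""
    error = np.abs(target - warped) * mask
    # Average only over the valid pixels (guard against an all-zero mask).
    return error.sum() / np.maximum(mask.sum(), 1.0)

def scale_consistency_loss(depth_a, depth_b_aligned):
    """Normalized difference between two depth predictions of the same
    scene points, pushing all predictions toward one global scale."""
    diff = np.abs(depth_a - depth_b_aligned) / (depth_a + depth_b_aligned)
    return diff.mean()
```

For example, if a 4x4 target and warped image agree everywhere except one occluded pixel, zeroing that pixel in the mask drives the reconstruction loss to exactly 0, whereas an all-ones mask leaves a nonzero residual; this is the mechanism by which the mask keeps occlusions out of the loss.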
Pages: 5392-5403 (12 pages)
Related Papers (44 total)
[1] Abadi M., 2015, TENSORFLOW LARGE SCA
[2] Almalioglu Y., 2019, IEEE International Conference on Robotics and Automation (ICRA), p. 5474, DOI: 10.1109/ICRA.2019.8793512
[3] Bian J.-W., 2019, Advances in Neural Information Processing Systems, Vol. 32
[4] Chen C., Rosa S., Miao Y., Lu C. X., Wu W., Markham A., Trigoni N., "Selective Sensor Fusion for Neural Visual-Inertial Odometry," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 10534-10543
[5] Chen P.-Y., Liu A. H., Liu Y.-C., Wang Y.-C. F., "Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2619-2627
[6] Cordts M., Omran M., Ramos S., Rehfeld T., Enzweiler M., Benenson R., Franke U., Roth S., Schiele B., "The Cityscapes Dataset for Semantic Urban Scene Understanding," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3213-3223
[7] Eigen D., 2014, Advances in Neural Information Processing Systems, Vol. 27
[8] Engel J., Koltun V., Cremers D., "Direct Sparse Odometry," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625
[9] Feng T., Gu D., "SGANVO: Unsupervised Deep Visual Odometry and Depth Estimation With Stacked Generative Adversarial Networks," IEEE Robotics and Automation Letters, 2019, 4(4): 4431-4437
[10] Garg R., VijayKumar B. G., Carneiro G., Reid I., "Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue," Computer Vision - ECCV 2016, Part VIII, Vol. 9912, pp. 740-756