Unsupervised Learning of Depth and Camera Pose with Feature Map Warping

Cited by: 2
Authors
Guo, Ente [1 ]
Chen, Zhifeng [1 ]
Zhou, Yanlin [2 ]
Wu, Dapeng Oliver [2 ]
Affiliations
[1] Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
[2] Univ Florida, Dept Elect & Comp Engn, Gainesville, FL 32611 USA
Funding
National Natural Science Foundation of China;
Keywords
monocular depth estimation; single camera egomotion; occlusion-aware mask network; feature pyramid matching loss;
DOI
10.3390/s21030923
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline classification codes
070302; 081704;
Abstract
Estimating image depth and agent egomotion is important for autonomous vehicles and robots to understand the surrounding environment and avoid collisions. Most existing unsupervised methods estimate depth and camera egomotion by minimizing the photometric error between adjacent frames. However, the photometric-consistency assumption is often violated in practice, for example under brightness changes, moving objects, and occlusion. To reduce the influence of brightness changes, we propose a feature pyramid matching loss (FPML) that measures the error between trainable features of the current frame and of adjacent frames, and is therefore more robust than the photometric error. In addition, we propose an occlusion-aware mask (OAM) network that indicates occlusion from changes in the predicted masks, improving the accuracy of depth and camera-pose estimation. Experimental results verify that the proposed unsupervised approach is highly competitive with state-of-the-art methods, both qualitatively and quantitatively. Specifically, our method reduces the absolute relative error (Abs Rel) by 0.017-0.088.
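For illustration, below is a minimal sketch of how a feature pyramid matching loss of this kind might be computed in PyTorch. The paper's exact feature extractor, warping procedure, and per-level weighting are not given in this record, so the function name, parameters, and uniform level weights here are hypothetical assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def feature_pyramid_matching_loss(feats_target, feats_warped, level_weights=None):
    # Illustrative sketch only; the paper's exact FPML formulation is assumed.
    # feats_target / feats_warped: lists of feature maps (B, C_l, H_l, W_l),
    # one per pyramid level. feats_warped would come from an adjacent frame
    # inverse-warped into the current view using predicted depth and pose.
    assert len(feats_target) == len(feats_warped)
    if level_weights is None:
        # Assumption: uniform weighting across pyramid levels.
        level_weights = [1.0 / len(feats_target)] * len(feats_target)
    loss = feats_target[0].new_tensor(0.0)
    for w, ft, fw in zip(level_weights, feats_target, feats_warped):
        # L1 distance between target and warped features at this level.
        loss = loss + w * F.l1_loss(ft, fw)
    return loss

In a training loop of this kind, both pyramids would typically be produced by a shared trainable encoder, so the matched quantity is a learned feature rather than raw pixel intensity, which is what makes the loss more tolerant of brightness changes than a purely photometric error.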
Pages: 1-15
Number of pages: 15