3D Hierarchical Refinement and Augmentation for Unsupervised Learning of Depth and Pose From Monocular Video

Cited by: 16
Authors
Wang, Guangming [1 ]
Zhong, Jiquan [1 ]
Zhao, Shijie [2 ]
Wu, Wenhua [1 ]
Liu, Zhe [3 ]
Wang, Hesheng [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai Engn Res Ctr Intelligent Control & Manage, Key Lab Syst Control & Informat Proc, Key Lab Marine Intelligent Equipment, Dept Automat, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Dept Engn Mech, Shanghai 200240, Peoples R China
[3] Shanghai Jiao Tong Univ, AI Inst, MOE Key Lab Artificial Intelligence, Shanghai 200240, Peoples R China
Keywords
Monocular depth estimation; visual odometry; unsupervised learning; pose refinement; 3D augmentation; VIEW SYNTHESIS; REMOVAL;
DOI
10.1109/TCSVT.2022.3215587
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification
0808 ; 0809 ;
Abstract
Depth and ego-motion estimation are essential for the localization and navigation of autonomous robots and autonomous driving. Recent studies make it possible to learn per-pixel depth and ego-motion from unlabeled monocular video. In this paper, a novel unsupervised training framework is proposed with 3D hierarchical refinement and augmentation using explicit 3D geometry. In this framework, the depth and pose estimations are hierarchically and mutually coupled to refine the estimated pose layer by layer. An intermediate view image is synthesized by warping the pixels of an image with the estimated depth and coarse pose. Then, the residual pose transformation is estimated from the new view image and the image of the adjacent frame to refine the coarse pose. The iterative refinement is implemented in a differentiable manner, allowing the whole framework to be optimized jointly. Meanwhile, a new image augmentation method is proposed for pose estimation by synthesizing a new view image, which augments the pose in 3D space while producing a new augmented 2D image. Experiments on KITTI demonstrate that our depth estimation achieves state-of-the-art performance and even surpasses recent approaches that utilize other auxiliary tasks. Our visual odometry outperforms all recent unsupervised monocular learning-based methods and achieves performance competitive with the geometry-based method ORB-SLAM2 with back-end optimization. The source code will be released at: https://github.com/IRMVLab/HRANet.
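The intermediate view synthesis described above rests on a standard geometric warp: back-project each pixel to 3D using the predicted depth, transform the points by the estimated (coarse) pose, and re-project them into the new view. A minimal NumPy sketch of that warp is below; the function and parameter names (`warp_to_intermediate_view`, `pose` as a 4x4 transform, `K` as 3x3 intrinsics) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def warp_to_intermediate_view(depth, pose, K):
    """Back-project pixels with predicted depth, transform by the estimated
    pose, and re-project -- the core warp behind intermediate view synthesis.

    depth : (H, W) per-pixel depth map
    pose  : (4, 4) camera transformation (illustrative, assumed rigid)
    K     : (3, 3) camera intrinsics
    Returns per-pixel sampling coordinates of shape (2, H, W).
    """
    h, w = depth.shape
    # Pixel grid in homogeneous coordinates.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    # Back-project to 3D camera coordinates using the predicted depth.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Apply the estimated (coarse) pose transformation.
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    cam_t = (pose @ cam_h)[:3]
    # Re-project into the new view (guard against division by ~0 depth).
    proj = K @ cam_t
    proj = proj[:2] / np.clip(proj[2:], 1e-6, None)
    return proj.reshape(2, h, w)
```

Sampling the source image at these coordinates (e.g. with bilinear interpolation) yields the intermediate view; the residual pose is then estimated between that synthesized view and the adjacent frame.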
Pages: 1776-1786
Page count: 11