Joint-Confidence-Guided Multi-Task Learning for 3D Reconstruction and Understanding From Monocular Camera

Cited: 1
Authors
Wang, Yufan [1 ,2 ]
Zhao, Qunfei [1 ,2 ]
Gan, Yangzhou [3 ,4 ]
Xia, Zeyang [3 ,4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Automat, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Ningbo Artificial Intelligence Inst, Shanghai 200240, Peoples R China
[3] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[4] Shenzhen Inst Adv Technol, CAS Key Lab Human Machine Intelligence Synergy Sys, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Monocular scene; multi-task learning; supervised learning; joint confidence; stochastic trust mechanism;
DOI
10.1109/TIP.2023.3240834
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
3D reconstruction and understanding from a monocular camera is a key problem in computer vision. Recent learning-based approaches, especially multi-task learning, have significantly improved the performance of the related tasks. However, existing works are still limited in exploiting loss-spatial-aware information. In this paper, we propose a novel Joint-Confidence-guided Network (JCNet) to simultaneously predict depth, semantic labels, surface normals, and a joint confidence map for the corresponding loss functions. Specifically, we design a Joint Confidence Fusion and Refinement (JCFR) module to fuse multi-task features in a unified independent space, which also absorbs the geometric-semantic structure features of the joint confidence map. Confidence-guided uncertainty generated from the joint confidence map supervises the multi-task predictions across the spatial and channel dimensions. To alleviate the imbalance of training attention among different loss functions and spatial regions, a Stochastic Trust Mechanism (STM) is designed to stochastically modify the elements of the joint confidence map during training. Finally, we design a calibrating operation that alternately optimizes the joint confidence branch and the other parts of JCNet to avoid overfitting. The proposed method achieves state-of-the-art performance in both geometric-semantic prediction and uncertainty estimation on NYU-Depth V2 and Cityscapes.
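The abstract's two key training-time ideas, a confidence map that weights each task's loss per pixel and a Stochastic Trust Mechanism that randomly resets confidence entries, can be illustrated with a minimal sketch. The paper's exact formulation is not given in this record, so the Kendall-style `c * loss - log(c)` weighting, the function names, and the `trust_prob` parameter below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def confidence_weighted_loss(task_loss, confidence, eps=1e-6):
    # Weight the per-pixel task loss by the predicted confidence map and add
    # a -log(confidence) regularizer so the network cannot trivially drive
    # all confidences to zero (Kendall-style uncertainty weighting; the
    # paper's exact loss may differ).
    c = np.clip(confidence, eps, 1.0)
    return float(np.mean(c * task_loss - np.log(c)))

def stochastic_trust(confidence, trust_prob=0.1, rng=None):
    # Illustrative Stochastic Trust Mechanism (STM): during training,
    # randomly reset a fraction of confidence entries to full trust (1.0),
    # so regions the network distrusts still receive gradient signal and
    # training attention stays balanced across spatial regions.
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(confidence.shape) < trust_prob
    out = confidence.copy()
    out[mask] = 1.0
    return out
```

In a multi-task setting, one such weighted term per task (depth, semantics, surface normals) would be summed, with STM applied to the shared joint confidence map only in the training phase.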
Pages: 1120-1133
Page count: 14