Adaptive Context-Aware Multi-Modal Network for Depth Completion

Cited by: 111
Authors
Zhao, Shanshan [1]
Gong, Mingming [2]
Fu, Huan [3]
Tao, Dacheng [1]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Fac Engn, Darlington, NSW 2008, Australia
[2] Univ Melbourne, Sch Math & Stat, Melbourne, Vic 3010, Australia
[3] Alibaba Grp, Beijing 100102, Peoples R China
Funding
Australian Research Council
Keywords
Feature extraction; Convolution; Adaptation models; Laser radar; Context modeling; Three-dimensional displays; Logic gates; Depth completion; context-aware; attention mechanism; multi-modal; graph propagation
DOI
10.1109/TIP.2021.3079821
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Depth completion aims to recover a dense depth map from sparse depth data and the corresponding single RGB image. The observed pixels provide significant guidance for recovering the depth of the unobserved pixels. However, due to the sparsity of the depth data, the standard convolution operation, used by most existing methods, is not effective at modeling the observed contexts with depth values. To address this issue, we propose to adopt graph propagation to capture the observed spatial contexts. Specifically, we first construct multiple graphs at different scales from the observed pixels. Since the graph structure varies from sample to sample, we then apply an attention mechanism to the propagation, which encourages the network to model the contextual information adaptively. Furthermore, considering the multi-modality of the input data, we apply graph propagation to the two modalities separately to extract multi-modal representations. Finally, we introduce a symmetric gated fusion strategy to exploit the extracted multi-modal features effectively. The proposed strategy preserves the original information of one modality while absorbing complementary information from the other by learning adaptive gating weights. Our model, named the Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves state-of-the-art performance on two benchmarks, i.e., KITTI and NYU-v2, while having fewer parameters than the latest models. Our code is available at: https://github.com/sshan-zhao/ACMNet.
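The abstract describes two mechanisms concretely enough to illustrate: attention-weighted propagation over a graph built from observed pixels, and a symmetric gated fusion between the RGB and depth branches. The sketch below is not the authors' implementation (that is at the GitHub link above); the class names, the k-nearest-neighbour graph construction, and all shapes and hyper-parameters are illustrative assumptions.

```python
# Minimal PyTorch sketch of the two ideas in the abstract. All names,
# shapes, and hyper-parameters are assumptions, not the ACMNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGraphPropagation(nn.Module):
    """Propagate features among observed pixels along a k-NN graph,
    weighting each neighbour with a learned attention score."""
    def __init__(self, dim, k=9):
        super().__init__()
        self.k = k
        self.attn = nn.Linear(2 * dim, 1)  # scores a (node, neighbour) pair

    def forward(self, feat, coords):
        # feat:   (N, C) features of the N observed (valid-depth) pixels
        # coords: (N, 2) pixel coordinates used to build the graph
        dist = torch.cdist(coords, coords)                          # (N, N)
        idx = dist.topk(self.k + 1, largest=False).indices[:, 1:]   # drop self
        neigh = feat[idx]                                           # (N, k, C)
        centre = feat.unsqueeze(1).expand_as(neigh)                 # (N, k, C)
        scores = self.attn(torch.cat([centre, neigh], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                           # (N, k)
        return feat + (alpha.unsqueeze(-1) * neigh).sum(dim=1)

class SymmetricGatedFusion(nn.Module):
    """Each branch keeps its own features and absorbs the other branch's
    features through a learned, element-wise sigmoid gate."""
    def __init__(self, dim):
        super().__init__()
        self.gate_rgb = nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.Sigmoid())
        self.gate_dep = nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.Sigmoid())

    def forward(self, f_rgb, f_dep):
        # f_rgb, f_dep: (B, C, H, W) features from the RGB and depth branches
        both = torch.cat([f_rgb, f_dep], dim=1)
        fused_rgb = f_rgb + self.gate_rgb(both) * f_dep  # absorbs depth cues
        fused_dep = f_dep + self.gate_dep(both) * f_rgb  # absorbs RGB cues
        return fused_rgb, fused_dep

# Toy usage with illustrative sizes:
prop = AttentiveGraphPropagation(dim=64, k=4)
out = prop(torch.randn(100, 64), torch.rand(100, 2) * 256)   # (100, 64)
fuse = SymmetricGatedFusion(dim=64)
f_rgb, f_dep = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```

Note that the fusion is symmetric in the sense the abstract states: each modality's original features pass through unchanged, and only the cross-modal contribution is modulated by a learned gate.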
Pages: 5264-5276
Page count: 13