DeCoTR: Enhancing Depth Completion with 2D and 3D Attentions

Cited by: 0
Authors
Shi, Yunxiao [1 ]
Singh, Manish Kumar [1 ]
Cai, Hong [1 ]
Porikli, Fatih [1 ]
Affiliations
[1] Qualcomm AI Research, San Diego, CA 92121, USA
Source
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2024
Keywords
Learning depth; Network; Vision
DOI
10.1109/CVPR52733.2024.01021
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we introduce a novel approach that harnesses both 2D and 3D attentions to enable highly accurate depth completion without requiring iterative spatial propagations. Specifically, we first enhance a baseline convolutional depth completion model by applying attention to 2D features in the bottleneck and skip connections. This effectively improves the performance of this simple network and sets it on par with the latest, complex transformer-based models. Leveraging the initial depths and features from this network, we uplift the 2D features to form a 3D point cloud and construct a 3D point transformer to process it, allowing the model to explicitly learn and exploit 3D geometric features. In addition, we propose normalization techniques to process the point cloud, which improves learning and leads to better accuracy than directly using point transformers off the shelf. Furthermore, we incorporate global attention on downsampled point cloud features, which enables long-range context while still being computationally feasible. We evaluate our method, DeCoTR, on established depth completion benchmarks, including NYU Depth V2 and KITTI, showcasing that it sets new state-of-the-art performance. We further conduct zero-shot evaluations on ScanNet and DDAD benchmarks and demonstrate that DeCoTR has superior generalizability compared to existing approaches.
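The abstract describes uplifting 2D features into a 3D point cloud using the initial depth estimates and then normalizing that cloud before it is processed by a point transformer. The following is a minimal, hypothetical PyTorch sketch of those two steps only; the function names, tensor shapes, and the specific centering-and-scaling normalization are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): back-project per-pixel features
# to a 3D point cloud with intrinsics and an initial depth map, then normalize
# the cloud. Shapes, names, and the normalization choice are assumptions.
import torch

def unproject_to_point_cloud(depth, features, K):
    """depth: (B, 1, H, W) initial depth; features: (B, C, H, W); K: (B, 3, 3) intrinsics.
    Returns xyz (B, H*W, 3) and per-point features (B, H*W, C)."""
    B, _, H, W = depth.shape
    device, dtype = depth.device, depth.dtype
    # Pixel grid in homogeneous coordinates
    v, u = torch.meshgrid(
        torch.arange(H, device=device, dtype=dtype),
        torch.arange(W, device=device, dtype=dtype),
        indexing="ij",
    )
    ones = torch.ones_like(u)
    pix = torch.stack([u, v, ones], dim=0).reshape(1, 3, -1).expand(B, -1, -1)  # (B, 3, H*W)
    # Back-project: X = depth * K^{-1} [u, v, 1]^T
    rays = torch.linalg.inv(K) @ pix                        # (B, 3, H*W)
    xyz = (rays * depth.reshape(B, 1, -1)).transpose(1, 2)  # (B, H*W, 3)
    feats = features.reshape(B, features.shape[1], -1).transpose(1, 2)  # (B, H*W, C)
    return xyz, feats

def normalize_point_cloud(xyz, eps=1e-6):
    """Center the cloud and scale it to roughly unit extent (an assumed stand-in
    for the normalization techniques mentioned in the abstract)."""
    xyz = xyz - xyz.mean(dim=1, keepdim=True)
    scale = xyz.norm(dim=-1).amax(dim=1, keepdim=True).clamp(min=eps)  # (B, 1)
    return xyz / scale.unsqueeze(-1)
```

In such a pipeline, the normalized coordinates and per-point features would then be fed to the point transformer stage, with global attention applied only on a downsampled subset of points to keep the cost feasible, as the abstract indicates.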
Pages: 10736-10746
Page count: 11