VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion

Cited by: 103
Authors
Li, Yiming [1]
Yu, Zhiding [2]
Choy, Christopher [2]
Xiao, Chaowei [2,3]
Alvarez, Jose M. [2,3]
Fidler, Sanja [2,3,4,5]
Feng, Chen [1]
Anandkumar, Anima [2,3,6]
Affiliations
[1] NYU, New York, NY 10012, USA
[2] NVIDIA, Santa Clara, CA 95051, USA
[3] ASU, Tempe, AZ, USA
[4] University of Toronto, Toronto, ON, Canada
[5] Vector Institute, Toronto, ON, Canada
[6] Caltech, Pasadena, CA, USA
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Keywords
Segmentation
DOI
10.1109/CVPR52729.2023.00877
Chinese Library Classification
TP18 (Artificial intelligence theory)
Discipline codes
081104; 0812; 0835; 1405
Abstract
Humans can easily imagine the complete 3D geometry of occluded objects and scenes. This appealing ability is vital for recognition and understanding. To enable such capability in AI systems, we propose VoxFormer, a Transformer-based semantic scene completion framework that can output complete 3D volumetric semantics from only 2D images. Our framework adopts a two-stage design where we start from a sparse set of visible and occupied voxel queries from depth estimation, followed by a densification stage that generates dense 3D voxels from the sparse ones. A key idea of this design is that the visual features on 2D images correspond only to the visible scene structures rather than the occluded or empty spaces. Therefore, starting with the featurization and prediction of the visible structures is more reliable. Once we obtain the set of sparse queries, we apply a masked autoencoder design to propagate the information to all the voxels by self-attention. Experiments on SemanticKITTI show that VoxFormer outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics and reduces GPU memory during training to less than 16GB. Our code is available at https://github.com/NVlabs/VoxFormer.
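The abstract outlines a two-stage architecture: sparse voxel queries at depth-predicted occupied locations first gather evidence from 2D image features, and a masked-autoencoder-style densification step then propagates that information to every voxel with self-attention before per-voxel semantic classification. The PyTorch sketch below illustrates this flow at a toy scale based only on that description; the class `TwoStageSSCSketch`, its module choices, and all tensor shapes are hypothetical illustrations, not the released VoxFormer implementation (see the GitHub link above for the actual code).

```python
# Minimal, hypothetical sketch of the two-stage idea described in the abstract.
# Stage 1: voxel queries gather evidence from 2D image features; only queries at
# depth-predicted occupied locations keep the result. Stage 2: a learnable mask
# token fills the remaining voxels and self-attention densifies the volume.
import torch
import torch.nn as nn


class TwoStageSSCSketch(nn.Module):
    def __init__(self, num_voxels=1000, feat_dim=128, num_classes=20, num_heads=8):
        super().__init__()
        # One learnable embedding per voxel location, used as queries.
        self.voxel_queries = nn.Embedding(num_voxels, feat_dim)
        # Learnable "mask token" for voxels not selected in stage 1,
        # in the spirit of a masked-autoencoder design.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        # Stage 1: queries attend to flattened 2D image features (cross-attention).
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Stage 2: the full voxel set exchanges information (self-attention).
        self.self_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)  # per-voxel semantics

    def forward(self, image_feats, occupied_mask):
        # image_feats: (B, N_pix, C) flattened features from an image backbone
        # occupied_mask: (B, N_vox) bool, voxels predicted occupied from depth
        B, n_vox = occupied_mask.shape
        queries = self.voxel_queries.weight.unsqueeze(0).expand(B, -1, -1)

        # Stage 1 (simplified): every query cross-attends to the image features,
        # but only queries at occupied locations keep the updated feature.
        updated, _ = self.cross_attn(queries, image_feats, image_feats)
        keep = occupied_mask.unsqueeze(-1)
        dense = torch.where(keep, updated, self.mask_token.expand(B, n_vox, -1))

        # Stage 2: propagate information to all voxels via self-attention.
        dense, _ = self.self_attn(dense, dense, dense)
        return self.classifier(dense)  # (B, N_vox, num_classes) logits


# Toy usage with random tensors standing in for backbone features and a
# depth-based occupancy proposal.
model = TwoStageSSCSketch()
feats = torch.randn(2, 300, 128)
occ = torch.rand(2, 1000) > 0.8
logits = model(feats, occ)
print(logits.shape)  # torch.Size([2, 1000, 20])
```

Restricting the kept stage-1 features to occupied voxels mirrors the paper's key observation that image features describe only visible structure; the mask token carries the "unknown" state into the densification stage.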
Pages: 9087-9098
Page count: 12