DDet3D: embracing 3D object detector with diffusion

Cited by: 0
Authors
Erabati, Gopi Krishna [1]
Araujo, Helder [1]
Affiliations
[1] Univ Coimbra, Inst Syst & Robot, Rua Silvio Lima, Polo 2, P-3030290 Coimbra, Portugal
Keywords
3D object detection; Diffusion; LiDAR; Autonomous driving; Computer vision
DOI
10.1007/s10489-024-06045-1
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Existing approaches to 3D object detection rely on heuristic or learnable object proposals, which must be optimised during training. In our approach, we replace these hand-crafted or learnable object proposals with randomly generated ones, formulating a new paradigm that employs a diffusion model to detect 3D objects from a set of randomly generated and supervised-learning-based object proposals in an autonomous driving application. We propose DDet3D, a diffusion-based 3D object detection framework that formulates 3D object detection as a generative task over the 3D bounding box coordinates in 3D space. To our knowledge, this work is the first to formulate 3D object detection with a denoising diffusion model and to establish that randomly generated and supervised-learning-based 3D proposals (as opposed to empirical anchors or learnt queries) are also viable object candidates for 3D object detection. During training, 3D noisy boxes are generated from the 3D ground-truth boxes by progressively adding Gaussian noise, and the DDet3D network is trained to reverse this diffusion process. During inference, the DDet3D network iteratively refines the randomly generated and supervised-learning-based noisy boxes to predict 3D bounding boxes conditioned on the LiDAR Bird's Eye View (BEV) features. An advantage of DDet3D is that it decouples the training and inference stages, enabling the use of a larger number of proposal boxes or sampling steps during inference to improve accuracy. We conduct extensive experiments and analysis on the nuScenes and KITTI datasets. DDet3D achieves competitive performance compared to well-designed 3D object detectors. Our work serves as a strong baseline for exploring more efficient diffusion models for 3D perception tasks.
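The forward (noising) step the abstract describes — progressively corrupting ground-truth boxes with Gaussian noise — can be sketched as follows. This is a minimal illustration only, assuming a standard DDPM-style linear noise schedule; the function names, schedule constants, and the (N, 7) box encoding (x, y, z, l, w, h, yaw, normalized) are our assumptions, not details taken from the paper.

```python
import numpy as np

def make_alpha_bar(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; alpha_bar[t] = prod_{s<=t} (1 - beta_s)."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    return np.cumprod(1.0 - betas)

def corrupt_boxes(gt_boxes, t, alpha_bar, rng=None):
    """Forward diffusion q(x_t | x_0) applied to normalized box parameters.

    gt_boxes : (N, 7) array of ground-truth box parameters.
    Returns the noisy boxes at step t and the noise that was added
    (the network would be trained to reverse this corruption).
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(gt_boxes.shape)
    noisy = np.sqrt(alpha_bar[t]) * gt_boxes + np.sqrt(1.0 - alpha_bar[t]) * noise
    return noisy, noise
```

At inference, the analogous idea is that boxes sampled from pure noise are iteratively denoised, conditioned on the BEV features; since the reverse process is run step by step, the number of proposal boxes and sampling steps can be chosen independently of how the model was trained, which is the decoupling the abstract highlights.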
Pages: 16