DDet3D: embracing 3D object detector with diffusion

Times cited: 0
Authors
Erabati, Gopi Krishna [1 ]
Araujo, Helder [1 ]
Affiliations
[1] Univ Coimbra, Inst Syst & Robot, Rua Silvio Lima, Polo 2, P-3030290 Coimbra, Portugal
Keywords
3D object detection; Diffusion; LiDAR; Autonomous driving; Computer vision
DOI
10.1007/s10489-024-06045-1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Existing approaches to 3D object detection rely on heuristic or learnable object proposals that must be optimised during training. We replace these hand-crafted or learnable proposals with randomly generated ones, formulating a new paradigm in which a diffusion model detects 3D objects from a set of randomly generated and supervised learning-based object proposals in an autonomous driving setting. We propose DDet3D, a diffusion-based 3D object detection framework that casts 3D object detection as a generative task over 3D bounding box coordinates. To our knowledge, this is the first work to formulate 3D object detection with a denoising diffusion model and to establish that randomly generated and supervised learning-based 3D proposals (as opposed to empirical anchors or learnt queries) are also viable object candidates for 3D object detection. During training, noisy 3D boxes are generated from the 3D ground-truth boxes by progressively adding Gaussian noise, and the DDet3D network is trained to reverse this diffusion process. At inference, the network iteratively refines randomly generated and supervised learning-based noisy boxes into 3D bounding box predictions, conditioned on LiDAR Bird's Eye View (BEV) features. Because DDet3D decouples training from inference, a larger number of proposal boxes or sampling steps can be used at inference to improve accuracy. We conduct extensive experiments and analysis on the nuScenes and KITTI datasets, where DDet3D achieves competitive performance compared with well-designed 3D object detectors. Our work serves as a strong baseline for exploring more efficient diffusion models for 3D perception tasks.
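To make the training and inference recipe in the abstract concrete, the sketch below illustrates the general diffusion-over-boxes idea in PyTorch: ground-truth boxes are corrupted with Gaussian noise under a cosine schedule during training, and randomly initialised boxes are iteratively refined at inference, conditioned on BEV features. This is a minimal sketch under assumptions; the box parametrisation, the number of timesteps, and the denoiser(boxes, bev_features, t) interface are illustrative and are not the authors' actual DDet3D implementation.

# Minimal sketch of the diffusion-over-boxes idea described in the abstract.
# All names (BOX_DIM, NUM_STEPS, denoiser, ...) are illustrative assumptions,
# not the DDet3D code.
import torch

BOX_DIM = 7          # assumed 3D box parametrisation: (x, y, z, l, w, h, yaw)
NUM_STEPS = 1000     # assumed number of diffusion timesteps

def cosine_alphas_cumprod(num_steps: int, s: float = 0.008) -> torch.Tensor:
    """Cosine noise schedule, a common choice for denoising diffusion models."""
    t = torch.linspace(0, num_steps, num_steps + 1) / num_steps
    f = torch.cos((t + s) / (1 + s) * torch.pi / 2) ** 2
    return (f / f[0]).clamp(1e-5, 1.0)[1:]

ALPHAS_CUMPROD = cosine_alphas_cumprod(NUM_STEPS)

def corrupt_gt_boxes(gt_boxes: torch.Tensor, t: int) -> torch.Tensor:
    """Forward process used at training time: progressively add Gaussian noise
    to ground-truth boxes; the network learns to reverse this corruption."""
    noise = torch.randn_like(gt_boxes)
    a_bar = ALPHAS_CUMPROD[t]
    return a_bar.sqrt() * gt_boxes + (1 - a_bar).sqrt() * noise

@torch.no_grad()
def iterative_refinement(denoiser, bev_features, num_proposals=300, steps=4):
    """Inference: start from random boxes and iteratively denoise them,
    conditioned on LiDAR BEV features. `denoiser(boxes, bev, t)` is an assumed
    interface that returns the predicted clean (x0) boxes."""
    boxes = torch.randn(num_proposals, BOX_DIM)           # random proposals
    timesteps = torch.linspace(NUM_STEPS - 1, 0, steps).long()
    for i, t in enumerate(timesteps):
        x0_pred = denoiser(boxes, bev_features, t)        # predict clean boxes
        if i + 1 < len(timesteps):                        # DDIM-style step back
            t_next = timesteps[i + 1]
            a_t, a_next = ALPHAS_CUMPROD[t], ALPHAS_CUMPROD[t_next]
            eps = (boxes - a_t.sqrt() * x0_pred) / (1 - a_t).sqrt()
            boxes = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
        else:
            boxes = x0_pred
    return boxes

Because the sampling loop is independent of training, the number of proposals and refinement steps can be increased at inference time, which is the decoupling advantage the abstract highlights.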
Pages: 16