ClusterFusion: Leveraging Radar Spatial Features for Radar-Camera 3D Object Detection in Autonomous Vehicles

Cited by: 2
Authors
Kurniawan, Irfan Tito [1 ]
Trilaksono, Bambang Riyanto [1 ]
Affiliations
[1] Institut Teknologi Bandung, School of Electrical Engineering and Informatics, Bandung 40132, Indonesia
Keywords
Feature extraction; Radar; Three-dimensional displays; Point cloud compression; Radar imaging; Cameras; Radar detection; Deep learning; Monocular camera; Fusion; 3D object detection
DOI
10.1109/ACCESS.2023.3328953
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Thanks to the complementary nature of millimeter-wave radar and cameras, deep learning-based radar-camera 3D object detection methods can reliably produce accurate detections even in low-visibility conditions. This makes them well suited for autonomous vehicles' perception systems, especially since the combined cost of both sensors is lower than that of a lidar. Recent radar-camera methods commonly perform feature-level fusion, which often involves projecting the radar points onto the same plane as the image features and fusing the features extracted from both modalities. While performing fusion on the image plane is generally simpler and faster, projecting radar points onto the image plane flattens the depth dimension of the point cloud, which may cause information loss and makes it harder to extract the point cloud's spatial features. We propose ClusterFusion, an architecture that leverages the local spatial features of the radar point cloud by clustering the point cloud and performing feature extraction directly on the clusters before projecting the features onto the image plane. ClusterFusion achieved state-of-the-art performance among radar-monocular camera methods on the test split of the nuScenes dataset, with a 48.7% nuScenes detection score (NDS). We also investigated three radar feature extraction strategies on the point cloud clusters: a handcrafted strategy, a learning-based strategy, and a combination of both, and found that the handcrafted strategy performed best. The main goal of this work is to explore the use of radar's local spatial and point-wise features, extracted directly from radar point cloud clusters, for a radar-monocular camera 3D object detection method that performs cross-modal feature fusion on the image plane.
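To make the pipeline described above more concrete, the following is a minimal sketch of the cluster-then-extract-then-project idea: radar returns are grouped into clusters, a handcrafted feature vector is computed per cluster, and the cluster is associated with an image-plane location. The clustering algorithm (DBSCAN), the particular feature set (centroid, extent, mean radar cross-section, mean radial velocity), and the example camera intrinsics are illustrative assumptions, not the paper's actual implementation.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_radar_points(points_xyz, eps=2.0, min_samples=1):
    # Group radar returns by spatial proximity; DBSCAN is one plausible choice,
    # not necessarily the clustering method used in the paper.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)

def handcrafted_cluster_features(points, rcs, vel):
    # Hypothetical hand-designed per-cluster descriptor:
    # centroid, spatial extent, mean RCS, mean radial velocity.
    centroid = points.mean(axis=0)                    # (x, y, z) cluster center
    extent = points.max(axis=0) - points.min(axis=0)  # bounding-box size of cluster
    return np.concatenate([centroid, extent, [rcs.mean()], [vel.mean()]])

def project_to_image(point_xyz, K):
    # Pinhole projection of a 3D point (camera frame) onto the image plane.
    uvw = K @ point_xyz
    return uvw[:2] / uvw[2]

# Toy example: radar returns assumed to be already in the camera coordinate frame.
pts = np.array([[1.0, 0.1, 10.0], [1.2, 0.0, 10.5], [8.0, 0.2, 30.0]])
rcs = np.array([5.0, 6.0, 2.0])       # radar cross-section per return
vel = np.array([0.5, 0.6, -4.0])      # compensated radial velocity per return
K = np.array([[1266.0, 0.0, 816.0],   # example intrinsics (nuScenes-like values)
              [0.0, 1266.0, 491.0],
              [0.0, 0.0, 1.0]])

labels = cluster_radar_points(pts)
for lbl in np.unique(labels[labels >= 0]):
    mask = labels == lbl
    feat = handcrafted_cluster_features(pts[mask], rcs[mask], vel[mask])
    uv = project_to_image(pts[mask].mean(axis=0), K)
    print(f"cluster {lbl}: feature {feat.round(2)} at pixel {uv.round(1)}")

In this sketch each cluster feature carries the depth information (centroid z, extent) computed before projection, which is the point of extracting features on the 3D clusters rather than after flattening the points onto the image plane.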
Pages: 121511-121528
Page count: 18