SMURF: Spatial Multi-Representation Fusion for 3D Object Detection With 4D Imaging Radar

Cited by: 16
Authors
Liu, Jianan [2 ]
Zhao, Qiuchi [1 ]
Xiong, Weiyi [1 ]
Huang, Tao [3 ]
Han, Qing-Long [4 ]
Zhu, Bing [1 ]
Affiliations
[1] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100191, Peoples R China
[2] Vitalent Consulting, S-41761 Gothenburg, Sweden
[3] James Cook Univ, Coll Sci & Engn, Cairns, Qld 4878, Australia
[4] Swinburne Univ Technol, Sch Sci Comp & Engn Technol, Melbourne, Vic 3122, Australia
Source
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES | 2024, Vol. 9, No. 1
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
Radar; Radar imaging; Point cloud compression; Radar detection; Feature extraction; Three-dimensional displays; Object detection; 4D imaging radar; radar point cloud; kernel density estimation; multi-dimensional Gaussian mixture; 3D object detection; autonomous driving; MIMO RADAR; NETWORK; CNN;
DOI
10.1109/TIV.2023.3322729
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The 4D millimeter-wave (mmWave) radar is a promising technology for vehicle sensing due to its cost-effectiveness and operability in adverse weather conditions. However, the adoption of this technology has been hindered by sparsity and noise issues in radar point cloud data. This article introduces spatial multi-representation fusion (SMURF), a novel approach to 3D object detection using a single 4D imaging radar. SMURF leverages multiple representations of radar detection points, including pillarization and density features of a multi-dimensional Gaussian mixture distribution through kernel density estimation (KDE). KDE effectively mitigates measurement inaccuracy caused by the limited angular resolution and multi-path propagation of radar signals. Additionally, KDE helps alleviate point cloud sparsity by capturing density features. Experimental evaluations on the View-of-Delft (VoD) and TJ4DRadSet datasets demonstrate the effectiveness and generalization ability of SMURF, which outperforms recently proposed 4D imaging radar-based single-representation models. Moreover, while using 4D imaging radar only, SMURF still achieves performance comparable to the state-of-the-art 4D imaging radar and camera fusion-based method, with an increase of 1.22% in the mean average precision on the bird's-eye view of the TJ4DRadSet dataset and 1.32% in the 3D mean average precision on the entire annotated area of the VoD dataset. The proposed method also achieves impressive inference time and addresses the challenge of real-time detection, with an inference time of no more than 0.05 seconds for most scans on both datasets. This research highlights the benefits of 4D mmWave radar and provides a strong benchmark for subsequent work on 3D object detection with 4D imaging radar.
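The density features described in the abstract can be illustrated with a minimal sketch: fit a Gaussian kernel density estimate over one sparse radar scan and attach the resulting density value to each detection point as an extra per-point feature. This is only an illustration of the KDE idea, not the paper's implementation; the function name, bandwidth rule, and feature layout are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_density_features(points: np.ndarray) -> np.ndarray:
    """Attach a KDE density value to each 3D radar detection point.

    points: (N, 3) array of x, y, z coordinates from one radar scan.
    Returns an (N, 1) column of densities that could be concatenated
    to the per-point features before pillarization.
    """
    # Fit a Gaussian kernel density estimate over the sparse point cloud.
    # scipy expects the data as (dims, N); the bandwidth defaults to
    # Scott's rule here (an assumption, not the paper's setting).
    kde = gaussian_kde(points.T)
    density = kde(points.T)  # density of the fitted mixture at each point
    return density[:, None]

# Example on 100 synthetic radar detections.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
feat = kde_density_features(pts)
assert feat.shape == (100, 1)
```

Because the density is evaluated at every point of the same scan, isolated (likely noisy) detections receive low values while detections inside dense clusters receive high values, which is one plausible way a density feature can compensate for point cloud sparsity.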
Pages: 799-812
Page count: 14
Related Papers
50 items total
  • [31] STFNET: Sparse Temporal Fusion for 3D Object Detection in LiDAR Point Cloud
    Meng, Xin
    Zhou, Yuan
    Ma, Jun
    Jiang, Fangdi
    Qi, Yongze
    Wang, Cui
    Kim, Jonghyuk
    Wang, Shifeng
    [J]. IEEE SENSORS JOURNAL, 2025, 25 (03) : 5866 - 5877
  • [32] HCPVF: Hierarchical Cascaded Point-Voxel Fusion for 3D Object Detection
    Fan, Baojie
    Zhang, Kexin
    Tian, Jiandong
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10) : 8997 - 9009
  • [33] EPNet++: Cascade Bi-Directional Fusion for Multi-Modal 3D Object Detection
    Liu, Zhe
    Huang, Tengteng
    Li, Bingling
    Chen, Xiwu
    Wang, Xi
    Bai, Xiang
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (07) : 8324 - 8341
  • [34] BiCo-Fusion: Bidirectional Complementary LiDAR-Camera Fusion for Semantic- and Spatial-Aware 3D Object Detection
    Song, Yang
    Wang, Lin
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (02): : 1457 - 1464
  • [35] LiDAR-Based All-Weather 3D Object Detection via Prompting and Distilling 4D Radar
    Chae, Yujeong
    Kim, Hyeonseong
    Oh, Changgyoon
    Kim, Minseok
    Yoon, Kuk-Jin
    [J]. COMPUTER VISION - ECCV 2024, PT LVI, 2025, 15114 : 368 - 385
  • [36] RI-Fusion: 3D Object Detection Using Enhanced Point Features With Range-Image Fusion for Autonomous Driving
    Zhang, Xinyu
    Wang, Li
    Zhang, Guoxin
    Lan, Tianwei
    Zhang, Haoming
    Zhao, Lijun
    Li, Jun
    Zhu, Lei
    Liu, Huaping
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [37] MVFAN: Multi-view Feature Assisted Network for 4D Radar Object Detection
    Yan, Qiao
    Wang, Yihan
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 493 - 511
  • [38] 3D Vehicle Detection Using Multi-Level Fusion From Point Clouds and Images
    Zhao, Kun
    Ma, Lingfei
    Meng, Yu
    Liu, Li
    Wang, Junbo
    Marcato, Jose, Jr.
    Goncalves, Wesley Nunes
    Li, Jonathan
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (09) : 15146 - 15154
  • [39] Radar Transformer: An Object Classification Network Based on 4D MMW Imaging Radar
    Bai, Jie
    Zheng, Lianqing
    Li, Sen
    Tan, Bin
    Chen, Sihan
    Huang, Libo
    [J]. SENSORS, 2021, 21 (11)
  • [40] Multi-Modal 3D Object Detection by Box Matching
    Liu, Zhe
    Ye, Xiaoqing
    Zou, Zhikang
    He, Xinwei
    Tan, Xiao
    Ding, Errui
    Wang, Jingdong
    Bai, Xiang
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, : 19917 - 19928