3-D Object Detection for Multiframe 4-D Automotive Millimeter-Wave Radar Point Cloud

Cited by: 44
Authors
Tan, Bin [1 ]
Ma, Zhixiong [1 ]
Zhu, Xichan [1 ]
Li, Sen [1 ]
Zheng, Lianqing [1 ]
Chen, Sihan [1 ]
Huang, Libo [2 ]
Bai, Jie [2 ]
Affiliations
[1] Tongji Univ, Sch Automot Studies, Shanghai 201804, Peoples R China
[2] Zhejiang Univ City Coll, Sch Informat & Elect, Hangzhou 310015, Zhejiang, Peoples R China
Keywords
Millimeter wave radar; Radar; Three-dimensional displays; Laser radar; Point cloud compression; Sensors; Radar detection; Autonomous driving; deep learning; millimeter-wave radar; object detection; point clouds; EGO-MOTION ESTIMATION; 3D;
DOI
10.1109/JSEN.2022.3219643
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Object detection is a crucial task in autonomous driving. Current object-detection methods for autonomous driving systems rely primarily on cameras and light detection and ranging (LiDAR), both of which can suffer interference from complex lighting or poor weather. The 4-D (x, y, z, v) millimeter-wave radar provides a denser point cloud, enabling 3-D object-detection tasks that are difficult to complete with traditional millimeter-wave radar. Existing 3-D point-cloud object-detection algorithms are mostly designed for 3-D LiDAR; these methods are not necessarily applicable to millimeter-wave radar, whose data are sparser and noisier but include velocity information. This study proposes a 3-D object-detection framework based on a multiframe 4-D millimeter-wave radar point cloud. First, the ego-vehicle velocity is estimated from the millimeter-wave radar measurements, and the relative velocity of each point in the radar point cloud is compensated to obtain its absolute velocity. Second, through interframe registration, the point clouds of multiple radar frames are aligned to the last frame. Finally, objects are detected by the proposed multiframe millimeter-wave radar point-cloud detection network. Experiments are performed on our newly recorded TJ4DRadSet dataset in a complex traffic environment. The results show that the proposed object-detection framework outperforms the comparison methods in terms of 3-D mean average precision. The experimental results and methods can serve as a baseline for other multiframe 4-D millimeter-wave radar detection algorithms.
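The velocity-compensation and frame-alignment steps summarized in the abstract can be sketched as follows. This is a minimal 2-D illustration under assumed conventions (Doppler returns from stationary targets, a known relative pose between frames), not the paper's actual implementation; all function names and signatures here are hypothetical.

```python
import numpy as np

def estimate_ego_velocity(directions, radial_velocities):
    """Least-squares ego-velocity estimate from Doppler returns.

    For a stationary target seen along unit line-of-sight vector d,
    the measured radial velocity is v_r = -d . v_ego, so we solve
    directions @ v_ego = -radial_velocities in the least-squares sense.
    """
    v_ego, *_ = np.linalg.lstsq(directions, -radial_velocities, rcond=None)
    return v_ego

def compensate_to_absolute(directions, radial_velocities, v_ego):
    """Add back the ego-motion component along each line of sight,
    turning measured (relative) radial velocity into absolute velocity."""
    return radial_velocities + directions @ v_ego

def transform_to_last_frame(points, yaw, translation):
    """Rigidly map a previous frame's xy points into the last frame,
    given the relative 2-D pose (yaw, translation) of that frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T + translation
```

After compensation, a stationary point's absolute radial velocity is close to zero, which is what separates static clutter from moving objects; `transform_to_last_frame` is the alignment step that lets several sparse radar frames be stacked into one denser input for the detection network.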
Pages: 11125-11138
Page count: 14
Related Papers
37 records in total
[31] Yang, Bin; Guo, Runsheng; Liang, Ming; Casas, Sergio; Urtasun, Raquel. RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects. Computer Vision - ECCV 2020, Pt. XVIII, 2020, 12363: 496-512.
[32] Yang, Zetong; Sun, Yanan; Liu, Shu; Jia, Jiaya. 3DSSD: Point-based 3D Single Stage Object Detector. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), 2020: 11037-11045.
[33] Yang, Zetong; Sun, Yanan; Liu, Shu; Shen, Xiaoyong; Jia, Jiaya. STD: Sparse-to-Dense 3D Object Detector for Point Cloud. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 1951-1960.
[34] Yin, Junbo; Shen, Jianbing; Guan, Chenye; Zhou, Dingfu; Yang, Ruigang. LiDAR-based Online 3D Video Object Detection with Graph-based Message Passing and Spatiotemporal Transformer Attention. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), 2020: 11492-11501.
[35] Yin, Tianwei; Zhou, Xingyi; Krahenbuhl, Philipp. Center-based 3D Object Detection and Tracking. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 11779-11788.
[36] Zheng, L. Q. 2022, arXiv, DOI: arXiv:2204.13483.
[37] Zhou, Yin; Tuzel, Oncel. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 4490-4499.