A Novel Method for Improving Point Cloud Accuracy in Automotive Radar Object Recognition

Cited by: 8
Authors
Lu, Guowei [1 ]
He, Zhenhua [1 ]
Zhang, Shengkai [1 ]
Huang, Yanqing [2 ]
Zhong, Yi [1 ]
Li, Zhuo [3 ]
Han, Yi [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Informat Engn, Wuhan 430070, Peoples R China
[2] SAIC GM Wuling Automobile Co Ltd, Bigdata Operat & Informat Technol Dept, Liuzhou 545007, Peoples R China
[3] SAIC GM Wuling Automobile Co Ltd, Plan & Operat Dept, Liuzhou 545007, Peoples R China
Keywords
Automotive radar; point clouds; GAN; object recognition; NETWORK; CAMERA; CNN;
DOI
10.1109/ACCESS.2023.3280544
Chinese Library Classification: TP [Automation technology; computer technology]
Discipline classification code: 0812
Abstract
High-quality environmental perception is crucial for self-driving cars. Integrating multiple sensors is the predominant research direction for enhancing the accuracy and resilience of autonomous driving systems. Millimeter-wave radar has recently gained attention from the academic community owing to its unique physical properties, which complement other sensing modalities such as vision. Unlike cameras and LiDAR, millimeter-wave radar is not affected by light or weather conditions, has a high penetration capability, and can operate day and night, making it an ideal sensor for object tracking and identification. However, the longer wavelengths of millimeter-wave signals present challenges, including sparse point clouds and susceptibility to multipath effects, which limit sensing accuracy. To enhance the object recognition capability of millimeter-wave radar, we propose a GAN-based point cloud enhancement method that converts sparse point clouds into RF images with richer semantic information, ultimately improving the accuracy of tasks such as object detection and semantic segmentation. We evaluated our method on the CARRADA and nuScenes datasets, and the experimental results demonstrate that our approach improves object classification accuracy by 11.35% and semantic segmentation accuracy by 4.88% compared to current state-of-the-art methods.
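The record does not detail the authors' pipeline, but the abstract's core idea is converting a sparse radar point cloud into a dense RF-image representation. As a minimal illustrative sketch (not the paper's implementation), one assumed preprocessing step rasterizes sparse detections (range, azimuth, returned power) into a range-azimuth grid that a GAN could then enrich; the function name, grid resolution, and field-of-view below are hypothetical:

```python
import numpy as np

def points_to_rf_image(points, n_range=64, n_azimuth=64,
                       max_range=50.0, fov=np.pi / 2):
    """Rasterize sparse radar detections into a 2-D range-azimuth grid.

    points: iterable of (range_m, azimuth_rad, power) tuples.
    Returns an (n_range, n_azimuth) float32 image where each cell
    accumulates the returned power of the detections falling in it.
    """
    img = np.zeros((n_range, n_azimuth), dtype=np.float32)
    for r, az, p in points:
        # Keep only detections inside the assumed sensing envelope.
        if 0.0 <= r < max_range and -fov / 2 <= az < fov / 2:
            i = int(r / max_range * n_range)          # range bin
            j = int((az + fov / 2) / fov * n_azimuth)  # azimuth bin
            img[i, j] += p
    return img

# Three sparse detections: two close together, one farther away.
pts = [(10.0, 0.1, 1.0), (10.2, 0.1, 0.5), (30.0, -0.4, 2.0)]
img = points_to_rf_image(pts)
```

Such a grid is far sparser than a camera image; a generator network would take it (or the raw radar spectra) as input and output a denser, semantically richer map for downstream detection and segmentation.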
Pages: 78538-78548
Number of pages: 11