FP-RCNN: A Real-Time 3D Target Detection Model based on Multiple Foreground Point Sampling for Autonomous Driving

Cited by: 2
Authors
Xu, Guoqing [1 ]
Xu, Xiaolong [2 ]
Gao, Honghao [3 ]
Xiao, Fu [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Key Lab Big Data Secur & Intelligent Proc, Nanjing, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Comp Sci, Nanjing, Peoples R China
[3] Shanghai Univ, Sch Comp Engn & Sci, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous driving; Deep learning; 3D target detection; Instance-aware downsampling; Anchor-free; OBJECT DETECTION;
DOI
10.1007/s11036-023-02092-z
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The perception module of an autonomous driving system must maintain high detection accuracy and speed under varied weather conditions. Two-dimensional target detection is fast but loses accuracy in bad weather, whereas three-dimensional (3D) target detection still performs well under such conditions. However, among current 3D target detection methods, single-stage detectors lack accuracy and two-stage detectors are slow. Therefore, in this study, we propose FP-RCNN, a real-time 3D target detection model based on multiple foreground point sampling for autonomous driving. FP-RCNN combines features from the raw points, the voxels, and the bird's-eye view (BEV): it applies sparse convolution at the voxel level, extracts features through multiple progressive downsampling stages, and projects the resulting features onto the BEV to obtain BEV features. A three-layer progressive sampling structure is used for key-point sampling; the third layer applies instance-aware downsampling, exploiting semantic information so that as many foreground points as possible are retained, and the three feature sources are fused by voxel set abstraction (VSA) to obtain the final features bound to the sampled key points. The second stage partitions the proposal boxes obtained in the first stage, fuses the contextual information of the raw points to obtain the final point features, and outputs confidence scores and boxes through two fully connected layers. FP-RCNN is evaluated on the KITTI dataset, and the results show a 6% improvement in pedestrian detection accuracy and a 50% improvement in detection speed compared with a representative two-stage approach.
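The instance-aware downsampling step described above can be illustrated with a minimal PyTorch sketch: a small MLP scores each point's likelihood of being foreground, and the highest-scoring points (with their features) are kept for the next sampling layer. The class name InstanceAwareDownsample, the layer sizes, and the plain top-k selection are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class InstanceAwareDownsample(nn.Module):
    """Hypothetical sketch: keep the num_keep points most likely to be foreground."""
    def __init__(self, in_channels: int, num_keep: int):
        super().__init__()
        self.num_keep = num_keep
        # Per-point foreground classifier; in a full model this head would be
        # supervised with segmentation labels derived from ground-truth boxes.
        self.score_head = nn.Sequential(
            nn.Linear(in_channels, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def forward(self, xyz, feats):
        # xyz:   (B, N, 3) point coordinates
        # feats: (B, N, C) per-point features from the previous sampling layer
        scores = self.score_head(feats).squeeze(-1)               # (B, N) foreground logits
        idx = torch.topk(scores, self.num_keep, dim=1).indices    # (B, K) kept-point indices
        idx = idx.unsqueeze(-1)
        kept_xyz = torch.gather(xyz, 1, idx.expand(-1, -1, 3))
        kept_feats = torch.gather(feats, 1, idx.expand(-1, -1, feats.size(-1)))
        return kept_xyz, kept_feats, scores   # scores can also feed a segmentation loss

# Example: downsample 4096 points to 512, biased toward likely foreground points.
layer = InstanceAwareDownsample(in_channels=128, num_keep=512)
xyz, feats = torch.rand(2, 4096, 3), torch.rand(2, 4096, 128)
kept_xyz, kept_feats, fg_scores = layer(xyz, feats)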
Pages: 369-381
Number of pages: 13