FP-RCNN: A Real-Time 3D Target Detection Model based on Multiple Foreground Point Sampling for Autonomous Driving

Cited by: 2
Authors
Xu, Guoqing [1 ]
Xu, Xiaolong [2 ]
Gao, Honghao [3 ]
Xiao, Fu [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Key Lab Big Data Secur & Intelligent Proc, Nanjing, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Comp Sci, Nanjing, Peoples R China
[3] Shanghai Univ, Sch Comp Engn & Sci, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
关键词
Autonomous driving; Deep learning; 3D target detection; Instance-aware downsampling; Anchor-free; OBJECT DETECTION;
DOI
10.1007/s11036-023-02092-z
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The perception module of an autonomous driving system must maintain high detection accuracy and speed under varying weather conditions. Two-dimensional target detection is fast but loses accuracy in bad weather, whereas three-dimensional (3D) target detection remains effective under such conditions. However, among current 3D target detection methods, single-stage detectors lack accuracy and two-stage detectors are slow. Therefore, in this study, we propose FP-RCNN, a real-time 3D target detection model based on multiple foreground point sampling for autonomous driving. FP-RCNN fuses features from the raw points, voxels, and the bird's-eye view (BEV): it applies sparse convolution at the voxel level, extracts features through multiple progressive downsampling stages, and projects the resulting features onto the BEV plane to obtain BEV features. A three-layer progressive sampling structure is used for key-point sampling. The third layer applies instance-aware downsampling, exploiting semantic information to retain as many foreground points as possible, and the three feature sources are aggregated by voxel set abstraction (VSA) operations to obtain the final features bound to the sampled key points. The second stage partitions each proposal box obtained in the first stage, fuses the contextual information of the raw points to obtain the final point features, and outputs the confidence-scored box through two fully connected layers. FP-RCNN is evaluated on the KITTI dataset, and the results show a 6% improvement in pedestrian detection and a 50% improvement in detection speed compared with a representative two-stage approach.
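To make the instance-aware downsampling step in the abstract more concrete, the following minimal sketch (PyTorch-style, not the authors' implementation) scores every point with a small foreground-segmentation head and keeps the top-scoring points as key points, so sampling concentrates on foreground objects rather than background. The class name, layer sizes, and the top-k selection strategy are illustrative assumptions, not details taken from the paper.

# Illustrative sketch of instance-aware downsampling (hypothetical, not the authors' code):
# keep the points whose predicted foreground score is highest.
import torch
import torch.nn as nn


class InstanceAwareSampler(nn.Module):
    """Scores each point with a small MLP and keeps the top-k as key points."""

    def __init__(self, in_channels: int, num_keypoints: int):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Per-point foreground/background classifier (assumed head).
        self.score_head = nn.Sequential(
            nn.Linear(in_channels, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def forward(self, points: torch.Tensor, features: torch.Tensor):
        # points:   (B, N, 3)  xyz coordinates
        # features: (B, N, C)  per-point features from earlier sampling layers
        scores = self.score_head(features).squeeze(-1)                   # (B, N)
        idx = scores.sigmoid().topk(self.num_keypoints, dim=1).indices   # (B, K)
        sampled_xyz = torch.gather(points, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
        sampled_feat = torch.gather(
            features, 1, idx.unsqueeze(-1).expand(-1, -1, features.size(-1))
        )
        # The scores would be supervised with point-wise segmentation labels.
        return sampled_xyz, sampled_feat, scores


# Example: sample 2048 key points from a 16384-point cloud with 32-dim features.
sampler = InstanceAwareSampler(in_channels=32, num_keypoints=2048)
xyz, feat, scores = sampler(torch.rand(2, 16384, 3), torch.rand(2, 16384, 32))

In practice, a sampler of this kind replaces distance-only farthest point sampling in the last sampling layer, which is why foreground coverage improves for small objects such as pedestrians.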
Pages: 369-381
Number of pages: 13