FP-RCNN: A Real-Time 3D Target Detection Model Based on Multiple Foreground Point Sampling for Autonomous Driving

Cited by: 2
Authors
Xu, Guoqing [1 ]
Xu, Xiaolong [2 ]
Gao, Honghao [3 ]
Xiao, Fu [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Key Lab Big Data Secur & Intelligent Proc, Nanjing, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Comp Sci, Nanjing, Peoples R China
[3] Shanghai Univ, Sch Comp Engn & Sci, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous driving; Deep learning; 3D target detection; Instance-aware downsampling; Anchor-free; Object detection;
DOI
10.1007/s11036-023-02092-z
CLC Classification
TP3 [Computing Technology; Computer Technology];
Discipline Code
0812;
Abstract
The perception module of an autonomous driving system must maintain high detection accuracy and speed under varied weather conditions. Two-dimensional target detection is fast but loses accuracy in bad weather, whereas three-dimensional (3D) target detection remains effective in such conditions. Among current 3D detection methods, however, single-stage algorithms lack accuracy and two-stage algorithms are slow. We therefore propose FP-RCNN, a real-time 3D target detection model for autonomous driving based on multiple foreground point sampling. FP-RCNN combines features from the raw points, the voxels, and the bird's-eye view (BEV): it applies sparse convolution at the voxel level, extracts features through several stages of progressive downsampling, and projects the resulting features onto the BEV plane to obtain BEV features. A three-layer progressive sampling structure is used for key-point sampling; the third layer applies instance-aware downsampling, which exploits semantic information so that as many foreground points as possible are retained. The three feature sources are then fused by voxel set abstraction (VSA) to obtain the final features bound to the sampled key points. In the second stage, the proposal boxes obtained in the first stage are divided, contextual information from the raw points is fused to obtain the final point features, and the confidence and refined box are output through two fully connected layers. FP-RCNN is evaluated on the KITTI dataset, where it improves pedestrian detection by 6% and detection speed by 50% compared with a representative two-stage approach.
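The instance-aware key-point sampling described above can be illustrated with a minimal sketch. The idea, as far as the abstract states it, is to bias downsampling toward likely foreground points using per-point semantic scores; the blend of farthest-point-style geometric coverage with a semantic weighting shown below, along with the function and parameter names, are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def semantic_guided_sampling(points, fg_scores, num_samples, fg_weight=1.0):
    """Downsample a point cloud while preferring likely foreground points.

    points:      (N, 3) xyz coordinates
    fg_scores:   (N,) per-point foreground probabilities in [0, 1],
                 e.g. from a small point-wise segmentation head (assumed)
    num_samples: number of key points to keep
    fg_weight:   how strongly the semantic score biases the selection
    """
    n = points.shape[0]
    if num_samples >= n:
        return torch.arange(n)

    # Distance to the already-selected set, as in farthest point sampling.
    dist_to_set = torch.full((n,), float("inf"))
    selected = torch.zeros(num_samples, dtype=torch.long)

    # Start from the point with the highest foreground score.
    selected[0] = torch.argmax(fg_scores)

    for i in range(1, num_samples):
        last = points[selected[i - 1]]
        d = torch.sum((points - last) ** 2, dim=1)
        dist_to_set = torch.minimum(dist_to_set, d)
        # Combine geometric coverage with the semantic prior:
        # far-away points AND likely-foreground points score high.
        score = dist_to_set * (1.0 + fg_weight * fg_scores)
        selected[i] = torch.argmax(score)

    return selected


if __name__ == "__main__":
    pts = torch.rand(2048, 3) * 50.0   # synthetic stand-in for LiDAR points
    scores = torch.rand(2048)          # synthetic foreground probabilities
    idx = semantic_guided_sampling(pts, scores, num_samples=256)
    print(idx.shape)                   # torch.Size([256])
```

In this sketch the semantic weight only rescales the distance term, so the coverage property of farthest point sampling is preserved while high-confidence foreground points are favored, which matches the abstract's stated goal of retaining as many foreground points as possible.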
Pages: 369-381
Page count: 13