RoIFusion: 3D Object Detection From LiDAR and Vision

Cited by: 34
Authors
Chen, Can [1 ]
Fragonara, Luca Zanotti [1 ]
Tsourdos, Antonios [1 ]
Affiliations
[1] Cranfield Univ, Sch Aerosp Transport & Mfg, Cranfield MK43 0AL, Beds, England
Source
IEEE ACCESS | 2021, Vol. 9, No. 09
Keywords
Three-dimensional displays; Feature extraction; Two-dimensional displays; Object detection; Neural networks; Detectors; Sensor fusion; 3D object detection; Regions of Interest; neural network; segmentation network; point cloud; image
DOI
10.1109/ACCESS.2021.3070379
CLC Number
TP [Automation and computer technology]
Subject Classification Number
0812
Abstract
When localizing and detecting 3D objects in autonomous driving scenes, information from multiple sensors (e.g., camera, LIDAR) can provide mutually complementary cues that enhance the robustness of 3D detectors. In this paper, a deep neural network architecture named RoIFusion is proposed to efficiently fuse multi-modality features for 3D object detection by leveraging the advantages of LIDAR and camera sensors. Instead of densely combining the point-wise features of the point cloud with the related pixel features, our fusion method aggregates a small set of 3D Regions of Interest (RoIs) in the point cloud with the corresponding 2D RoIs in the images, which reduces the computation cost and avoids viewpoint misalignment during feature aggregation across sensors. Finally, extensive experiments on the challenging KITTI 3D object detection benchmark show the effectiveness of our fusion method and demonstrate that our deep fusion approach achieves state-of-the-art performance.
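The core idea in the abstract is that fusion happens over a small set of paired RoIs rather than over every point-to-pixel correspondence: the k-th 3D RoI (from the LiDAR branch) is matched with the k-th 2D RoI (its projection in the image), so combining the two modalities reduces to a cheap row-wise operation over K proposals. The sketch below illustrates only that pairing-and-concatenation idea with NumPy; the array shapes, feature dimensions, and the `fuse_rois` helper are hypothetical and do not reproduce the authors' actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K paired RoIs per scene, with a C_PC-dimensional
# feature pooled from the point cloud and a C_IMG-dimensional feature
# pooled from the image for each RoI.
K, C_PC, C_IMG = 16, 128, 96

roi_feats_3d = rng.standard_normal((K, C_PC)).astype(np.float32)   # LiDAR branch
roi_feats_2d = rng.standard_normal((K, C_IMG)).astype(np.float32)  # image branch

def fuse_rois(f3d: np.ndarray, f2d: np.ndarray) -> np.ndarray:
    """Fuse per-RoI features from both sensors (illustrative sketch).

    Because the k-th 3D RoI is paired one-to-one with the k-th 2D RoI,
    fusion is a row-wise concatenation over K proposals, not a dense
    point-to-pixel matching -- this is what keeps the cost low.
    """
    assert f3d.shape[0] == f2d.shape[0], "RoIs must be paired one-to-one"
    return np.concatenate([f3d, f2d], axis=1)

fused = fuse_rois(roi_feats_3d, roi_feats_2d)
print(fused.shape)  # (16, 224): one 224-dim fused vector per RoI
```

In the paper's setting the fused per-RoI vectors would then feed a detection head that regresses the final 3D boxes; here the sketch stops at the fusion step itself.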
Pages: 51710-51721
Page count: 12