Filter Fusion: Camera-LiDAR Filter Fusion for 3-D Object Detection With a Robust Fused Head

Cited by: 0
Authors
Xu, Yaming [1 ]
Li, Boliang [1 ]
Wang, Yan [1 ]
Cui, Yihan [2 ]
Affiliations
[1] Harbin Inst Technol, Sch Astronaut, Harbin 150006, Heilongjiang, Peoples R China
[2] Army Acad Armored Forces, Sergeant Sch, Changchun 130000, Peoples R China
Keywords
Three-dimensional displays; Feature extraction; Object detection; Laser radar; Point cloud compression; Detectors; Cameras; Difference function; feature secondary filtering; filter fusion; robust fused head; visual fusion rotating platform;
DOI
10.1109/TIM.2024.3449944
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
The differing representations of images and point clouds make fusion difficult, leading to suboptimal performance in 3-D object detection. We propose a camera-light detection and ranging (LiDAR) filter fusion framework for 3-D object detection based on feature secondary filtering. The framework uses two uncoupled object detection structures to extract image and point-cloud features, and a robust camera-LiDAR fused head to fuse features from multisource heterogeneous sensors. Unlike previous work, we propose a novel four-stage fusion strategy that fully exploits the distinct features extracted by the two uncoupled 3-D object detectors. Because each detector is dedicated to one modality, the network extracts heterostructural features more completely, which yields richer information, especially for smaller objects. In addition, we propose a difference function for more efficient fusion of the independent features produced by the uncoupled extractors. We mathematically prove the validity of the robust fused head and verify the effectiveness of our filter fusion framework in a test scene and on the KITTI dataset, particularly for KITTI pedestrian detection. The code is available at: https://github.com/xuminglei-hit/FilterFusion
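The abstract gives only a high-level description of the fused head and the difference function; the authors' actual implementation is in the linked repository. As a rough, hedged illustration of the general idea (fusing independently extracted camera and LiDAR features together with a difference term before a detection head), here is a minimal PyTorch sketch. All names and shapes here are hypothetical assumptions, not the paper's code.

```python
# Illustrative sketch only: fusing camera and LiDAR feature maps with a
# difference term. Module names (FusedHead, etc.) are hypothetical and do
# not reproduce the authors' implementation.
import torch
import torch.nn as nn


class FusedHead(nn.Module):
    """Toy fused head combining camera and LiDAR feature maps of equal shape."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        # Concatenate camera features, LiDAR features, and their difference.
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        self.cls_head = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # A simple "difference function": the element-wise difference
        # highlights where the two modalities disagree, letting the head
        # weigh the modalities before prediction.
        diff = cam_feat - lidar_feat
        fused = self.fuse(torch.cat([cam_feat, lidar_feat, diff], dim=1))
        return self.cls_head(fused)


if __name__ == "__main__":
    head = FusedHead(channels=64, num_classes=3)
    cam = torch.randn(1, 64, 100, 100)    # camera features projected to a common grid
    lidar = torch.randn(1, 64, 100, 100)  # LiDAR (e.g., BEV) feature map
    scores = head(cam, lidar)
    print(scores.shape)  # torch.Size([1, 3, 100, 100])
```

This sketch assumes both feature maps have already been aligned to a common spatial grid; the paper's four-stage fusion strategy and secondary filtering are not reproduced here.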
Pages: 12