Filter Fusion: Camera-LiDAR Filter Fusion for 3-D Object Detection With a Robust Fused Head

Cited by: 0
Authors
Xu, Yaming [1 ]
Li, Boliang [1 ]
Wang, Yan [1 ]
Cui, Yihan [2 ]
Affiliations
[1] Harbin Inst Technol, Sch Astronaut, Harbin 150006, Heilongjiang, Peoples R China
[2] Army Acad Armored Forces, Sergeant Sch, Changchun 130000, Peoples R China
Keywords
Three-dimensional displays; Feature extraction; Object detection; Laser radar; Point cloud compression; Detectors; Cameras; Difference function; feature secondary filtering; filter fusion; robust fused head; visual fusion rotating platform
DOI
10.1109/TIM.2024.3449944
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
The differing representations of images and point clouds make fusion difficult, resulting in suboptimal performance of 3-D object detection methods. We propose a camera-light detection and ranging (LiDAR) filter fusion framework for 3-D object detection based on feature secondary filtering. The framework uses two uncoupled object detection structures to extract image and point-cloud features, and a robust camera-LiDAR fused head to fuse features from multisource heterogeneous sensors. Unlike previous work, we propose a novel four-stage fusion strategy to fully exploit the unique features extracted by the two uncoupled 3-D object detectors. Our network extracts heterostructural features through dedicated detectors, which makes the extracted information more sufficient, especially for smaller objects. In addition, we propose a difference function for more efficient fusion of independent features from the uncoupled object extractors. We mathematically prove the validity of the robust fused head and verify the effectiveness of our filter fusion framework in a test scene and on the KITTI dataset, particularly for KITTI pedestrian detection. The code is available at: https://github.com/xuminglei-hit/FilterFusion
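The abstract does not give the exact form of the proposed difference function, so the following is only a minimal sketch of the general idea: measure per-channel disagreement between the two detectors' features and use it to weight the fusion, trusting the average where the sensors agree and the stronger response where they disagree. The function name, the exponential weighting, and the `beta` parameter are all assumptions for illustration, not the paper's actual formulation.

```python
import math

def difference_weighted_fusion(cam_feat, lidar_feat, beta=1.0):
    """Hypothetical difference-function fusion of two per-channel feature vectors.

    cam_feat, lidar_feat: equal-length lists of floats from the two detectors.
    beta: assumed sharpness parameter controlling how fast trust decays
          with disagreement (not from the paper).
    """
    fused = []
    for c, l in zip(cam_feat, lidar_feat):
        d = abs(c - l)                   # per-channel disagreement
        w = math.exp(-beta * d)          # agreement -> w near 1, trust the mean
        # Where channels disagree, fall back toward the stronger response.
        fused.append(w * 0.5 * (c + l) + (1.0 - w) * max(c, l))
    return fused
```

Under this sketch, identical features pass through unchanged, while conflicting channels are pulled toward the larger activation rather than averaged away.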
Pages: 12
Related Papers (50 total)
  • [11] Huang, Kemiao; Hao, Qi: Joint Multi-Object Detection and Tracking with Camera-LiDAR Fusion for Autonomous Driving. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021: 6983-6989
  • [12] Lin, Chunmian; Tian, Daxin; Duan, Xuting; Zhou, Jianshan; Zhao, Dezong; Cao, Dongpu: CL3D: Camera-LiDAR 3D Object Detection With Point Feature Enhancement and Point-Guided Fusion. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (10): 18040-18050
  • [13] Shi, Peicheng; Liu, Zhiqiang; Dong, Xinlong; Yang, Aixi: CL-fusionBEV: 3D object detection method with camera-LiDAR fusion in Bird's Eye View. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (06): 7681-7696
  • [14] Wang, Jingxuan; Lu, Yuanyao; Jiang, Haiyang: FAFNs: Frequency-Aware LiDAR-Camera Fusion Networks for 3-D Object Detection. IEEE SENSORS JOURNAL, 2023, 23 (24): 30847-30857
  • [15] Liu, Leyuan; He, Jian; Ren, Keyan; Xiao, Zhonghua; Hou, Yibin: A LiDAR-Camera Fusion 3D Object Detection Algorithm. INFORMATION, 2022, 13 (04)
  • [16] Kageyama, R.; Nagamine, N.; Mukojima, H.: Train Frontal Obstacle Detection Method with Camera-LiDAR Fusion. Quarterly Report of RTRI (Railway Technical Research Institute), 2022, 63 (03): 181-186
  • [17] Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen: Robust Curb Detection with Fusion of 3D-Lidar and Camera Data. SENSORS, 2014, 14 (05): 9046-9073
  • [18] Qin, Yiran; Wang, Chaoqun; Kang, Zijian; Ma, Ningning; Li, Zhen; Zhang, Ruimao: SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 21957-21967
  • [19] Ye, Chao; Pan, Huihui; Yu, Xinghu; Gao, Huijun: A spatially enhanced network with camera-lidar fusion for 3D semantic segmentation. NEUROCOMPUTING, 2022, 484: 59-66
  • [20] Wang, Xiyang; Fu, Chunyun; Li, Zhankun; Lai, Ying; He, Jiawei: DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion With Deep Association. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03): 8260-8267