An Extensible Multi-Sensor Fusion Framework for 3D Imaging

Cited by: 1
Authors
Siddiqui, Talha Ahmad [1 ]
Madhok, Rishi [1 ]
O'Toole, Matthew [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020) | 2020
Keywords
LIDAR;
DOI
10.1109/CVPRW50498.2020.00512
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many autonomous vehicles rely on an array of sensors for safe navigation, where each sensor captures different visual attributes from the surrounding environment. For example, a single conventional camera captures high-resolution images but no 3D information; a LiDAR provides excellent range information but poor spatial resolution; and a prototype single-photon LiDAR (SP-LiDAR) can provide a dense but noisy representation of the 3D scene. Although the outputs of these sensors vary dramatically (e.g., 2D images, point clouds, 3D volumes), they all derive from the same 3D scene. We propose an extensible sensor fusion framework that (1) lifts the sensor output to volumetric representations of the 3D scene, (2) fuses these volumes together, and (3) processes the resulting volume with a deep neural network to generate a depth (or disparity) map. Although our framework can potentially extend to many types of sensors, we focus on fusing combinations of three imaging systems: monocular/stereo cameras, regular LiDARs, and SP-LiDARs. To train our neural network, we generate a synthetic dataset through CARLA that contains the individual measurements. We also conduct various fusion ablation experiments and evaluate the results of different sensor combinations.
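The three-stage pipeline outlined in the abstract (lift each sensor's output into a shared 3D volume, fuse the volumes, decode a depth map) can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption: the grid size, the function names, the plane-sweep-style camera lifting, and the argmax "decoder" that stands in for the paper's neural network are not the authors' implementation.

```python
import numpy as np

GRID = (8, 8, 8)  # (depth, height, width) bins of the shared scene volume

def lift_camera(image):
    # A camera pixel constrains an entire ray of depths: replicate its
    # intensity along the depth axis (a plane-sweep-style lifting).
    return np.broadcast_to(image, GRID).astype(float)

def lift_lidar(points):
    # Scatter sparse, normalized 3D returns into an occupancy volume.
    vol = np.zeros(GRID)
    idx = np.clip((points * np.array(GRID)).astype(int), 0, np.array(GRID) - 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

def fuse(volumes):
    # Simplest possible fusion: stack along a channel axis; the paper's
    # network would then reduce this stack to a depth (or disparity) map.
    return np.stack(volumes, axis=0)

def depth_from_volume(fused):
    # Stand-in for the neural network: argmax over the depth axis of the
    # mean fused volume gives a coarse depth index per pixel.
    return fused.mean(axis=0).argmax(axis=0)

image = np.random.rand(8, 8)        # monocular camera frame
points = np.random.rand(100, 3)     # normalized LiDAR returns
fused = fuse([lift_camera(image), lift_lidar(points)])
depth = depth_from_volume(fused)
print(fused.shape, depth.shape)     # (2, 8, 8, 8) (8, 8)
```

Because every sensor is first mapped into the same volumetric grid, adding a new modality (e.g., the SP-LiDAR's noisy photon-count volume) only requires writing one more `lift_*` function, which is what makes the framework extensible.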
Pages: 4344-4353
Page count: 10
Related Papers
50 items total
  • [1] A Multi-Sensor Fusion Framework in 3-d
    Jain, Vishal
    Miller, Andrew C.
    Mundy, Joseph L.
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2013, : 314 - 319
  • [2] Multi-Sensor Depth Fusion Framework for Real-Time 3D Reconstruction
    Ali, Muhammad Kashif
    Rajput, Asif
    Shahzad, Muhammad
    Khan, Farhan
    Akhtar, Faheem
    Börner, Anko
    IEEE ACCESS, 2019, 7 : 136471 - 136480
  • [3] Fusion of multi-sensor passive and active 3D imagery
    Fay, DA
    Verly, JG
    Braun, MI
    Frost, C
    Racamato, JP
    Waxman, AM
    ENHANCED AND SYNTHETIC VISION 2001, 2001, 4363 : 219 - 230
  • [4] Asynchronous Multi-Sensor Fusion for 3D Mapping and Localization
    Geneva, Patrick
    Eckenhoff, Kevin
    Huang, Guoquan
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 5994 - 5999
  • [5] Multi-Task Multi-Sensor Fusion for 3D Object Detection
    Liang, Ming
    Yang, Bin
    Chen, Yun
    Hu, Rui
    Urtasun, Raquel
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 7337 - 7345
  • [6] MULTI-SENSOR DATA FUSION FOR REALISTIC AND ACCURATE 3D RECONSTRUCTION
    Hannachi, Ammar
    Kohler, Sophie
    Lallement, Alex
    Hirsch, Ernest
    2014 5TH EUROPEAN WORKSHOP ON VISUAL INFORMATION PROCESSING (EUVIP 2014), 2014,
  • [7] Deep Continuous Fusion for Multi-sensor 3D Object Detection
    Liang, Ming
    Yang, Bin
    Wang, Shenlong
    Urtasun, Raquel
    COMPUTER VISION - ECCV 2018, PT XVI, 2018, 11220 : 663 - 678
  • [8] 3D spatial mapping of roadways based on multi-sensor fusion
    Liu, Feng
    Wang, Hongwei
    Liu, Yu
    Meitan Xuebao/Journal of the China Coal Society, 2024, 49 (09): : 4019 - 4026
  • [9] 3D Point Cloud Generation Based on Multi-Sensor Fusion
    Han, Yulong
    Sun, Haili
    Lu, Yue
    Zhong, Ruofei
    Ji, Changqi
    Xie, Si
    APPLIED SCIENCES-BASEL, 2022, 12 (19):
  • [10] Versatile 3D Multi-Sensor Fusion for Lightweight 2D Localization
    Geneva, Patrick
    Merrill, Nathaniel
    Yang, Yulin
    Chen, Chuchu
    Lee, Woosik
    Huang, Guoquan
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 4513 - 4520