CLONeR: Camera-Lidar Fusion for Occupancy Grid-Aided Neural Representations

Cited by: 4
Authors
Carlson, Alexandra [1 ,2 ]
Ramanagopal, Manikandasriram S. [3 ]
Tseng, Nathan [1 ,2 ]
Johnson-Roberson, Matthew [4 ]
Vasudevan, Ram [3 ]
Skinner, Katherine A. [3 ]
Affiliations
[1] Univ Michigan, Dept Robot, Ann Arbor, MI 48104 USA
[2] Ford Motor Co, Dearborn, MI 48126 USA
[3] Univ Michigan, Dept Robot, Ann Arbor, MI 48104 USA
[4] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
Keywords
Laser radar; Cameras; Three-dimensional displays; Solid modeling; Image color analysis; Training; Computational modeling; Deep learning for visual perception; sensor fusion; computer vision for transportation
DOI
10.1109/LRA.2023.3262139
Chinese Library Classification
TP24 [Robotics]
Discipline Code
080202; 1405
Abstract
Recent advances in neural radiance fields (NeRFs) achieve state-of-the-art novel view synthesis and facilitate dense estimation of scene properties. However, NeRFs often fail for outdoor, unbounded scenes that are captured under very sparse views with the scene content concentrated far away from the camera, as is typical for field robotics applications. In particular, NeRF-style algorithms perform poorly: 1) when there are insufficient views with little pose diversity, 2) when scenes contain saturation and shadows, and 3) when finely sampling large unbounded scenes with fine structures becomes computationally intensive. This letter proposes CLONeR, which significantly improves upon NeRF by allowing it to model large unbounded outdoor driving scenes that are observed from sparse input sensor views. This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively. In addition, this letter proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGMs) alongside the NeRF model, and leverages this occupancy grid for improved sampling of points along a ray for volumetric rendering in metric space. Through extensive quantitative and qualitative experiments on scenes from the KITTI dataset, this letter demonstrates that the proposed method outperforms state-of-the-art NeRF models on both novel view synthesis and dense depth prediction tasks when trained on sparse input data.
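The occupancy-grid-guided ray sampling described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the coarse/fine sample counts, and the assumption that the grid origin coincides with the world origin are all hypothetical. The idea is simply that a binary 3D occupancy grid (learned from LiDAR) lets the renderer concentrate ray samples in voxels likely to contain geometry, instead of spreading them uniformly along the ray in a large unbounded scene.

```python
import numpy as np

def occupied_ray_samples(origin, direction, grid, voxel_size, t_max,
                         n_coarse=64, n_fine=4):
    """Return sample depths along a ray, kept only where the occupancy
    grid is marked occupied (illustrative sketch, not the paper's code).

    origin, direction : (3,) arrays; direction is assumed unit-length.
    grid              : boolean 3D occupancy grid with origin at world origin.
    voxel_size, t_max : metric voxel edge length and maximum ray depth.
    """
    # Coarse, uniformly spaced depths along the ray in metric space.
    t = np.linspace(0.0, t_max, n_coarse)
    pts = origin[None, :] + t[:, None] * direction[None, :]   # (n_coarse, 3)
    idx = np.floor(pts / voxel_size).astype(int)              # voxel indices

    # Keep only samples that land inside the grid and in occupied voxels.
    inside = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    keep = np.zeros(n_coarse, dtype=bool)
    keep[inside] = grid[tuple(idx[inside].T)]
    kept_t = t[keep]

    # Densify: draw n_fine extra random depths inside each occupied
    # coarse interval, so occupied space gets most of the sample budget.
    dt = t_max / (n_coarse - 1)
    fine = (kept_t[:, None] + np.random.rand(kept_t.size, n_fine) * dt).ravel()
    return np.sort(np.concatenate([kept_t, fine]))
```

With, say, an 8-voxel grid where only the voxel at index (4, 0, 0) is occupied and a ray along the x-axis, every returned depth falls inside that one occupied interval, and empty space receives no samples at all.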
Pages: 2812-2819
Page count: 8
Related Papers
50 results
  • [1] Efficient Occupancy Grid Mapping and Camera-LiDAR Fusion for Conditional Imitation Learning Driving
    Eraqi, Hesham M.
    Moustafa, Mohamed N.
    Honer, Jens
    2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020
  • [2] A Camera-LiDAR Fusion Framework for Traffic Monitoring
    Sochaniwsky, Adrian
    Huangfu, Yixin
    Habibi, Saeid
    Von Mohrenschildt, Martin
    Ahmed, Ryan
    Bhuiyan, Mymoon
    Wyndham-West, Kyle
    Vidal, Carlos
    2024 IEEE TRANSPORTATION ELECTRIFICATION CONFERENCE AND EXPO, ITEC 2024, 2024
  • [3] Multiple Objects Localization With Camera-LIDAR Sensor Fusion
    Hocaoglu, Gokce Sena
    Benli, Emrah
    IEEE SENSORS JOURNAL, 2025, 25 (07) : 11892 - 11905
  • [4] Camera-LiDAR Data Fusion for Autonomous Mooring Operation
    Subedi, Dipendra
    Jha, Ajit
    Tyapin, Ilya
    Hovland, Geir
    PROCEEDINGS OF THE 15TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA 2020), 2020, : 1176 - 1181
  • [5] Camera-LiDAR Fusion for Object Detection, Tracking and Prediction
    Huang Y.
    Zhou J.
    Huang Q.
    Li B.
    Wang L.
    Zhu J.
    Wuhan Daxue Xuebao (Xinxi Kexue Ban)/Geomatics and Information Science of Wuhan University, 2024, 49 (06): : 945 - 951
  • [6] Camera-LIDAR Integration: Probabilistic Sensor Fusion for Semantic Mapping
    Berrio, Julie Stephany
    Shan, Mao
    Worrall, Stewart
    Nebot, Eduardo
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (07) : 7637 - 7652
  • [7] Train Frontal Obstacle Detection Method with Camera-LiDAR Fusion
    Kageyama R.
    Nagamine N.
    Mukojima H.
    Quarterly Report of RTRI (Railway Technical Research Institute), 2022, 63 (03) : 181 - 186
  • [8] Learning Optical Flow and Scene Flow With Bidirectional Camera-LiDAR Fusion
    Liu, Haisong
    Lu, Tao
    Xu, Yihui
    Liu, Jia
    Wang, Limin
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (04) : 2378 - 2395
  • [9] Camera-Lidar sensor fusion for drivable area detection in winter weather using convolutional neural networks
    Rawashdeh, Nathir A.
    Bos, Jeremy P.
    Abu-Alrub, Nader J.
    OPTICAL ENGINEERING, 2023, 62 (03)
  • [10] Camera-LiDAR Fusion Method with Feature Switch Layer for Object Detection Networks
    Kim, Taek-Lim
    Park, Tae-Hyoung
    SENSORS, 2022, 22 (19)