RVNet: Deep Sensor Fusion of Monocular Camera and Radar for Image-Based Obstacle Detection in Challenging Environments

Cited by: 70
Authors: John, Vijay [1]; Mita, Seiichi [1]
Affiliation: [1] Toyota Technological Institute, Nagoya, Aichi, Japan
Source: IMAGE AND VIDEO TECHNOLOGY (PSIVT 2019), 2019, Vol. 11854
Keywords: Sensor fusion; Radar; Monocular camera
DOI: 10.1007/978-3-030-34879-3_27
Chinese Library Classification (CLC): TP301 [Theory, Methods]
Subject Classification Code: 081202
Abstract
Camera- and radar-based obstacle detection are important research topics in environment perception for autonomous driving. Camera-based obstacle detection achieves state-of-the-art accuracy, but its performance degrades in challenging environments, where the camera features become noisy and limit detection accuracy. In comparison, radar-based obstacle detection methods using the 77 GHz long-range radar are not affected by these challenging environments; however, the radar features are sparse and do not delineate the obstacles. The camera and radar features are therefore complementary, and their fusion yields robust obstacle detection in varied environments. Once the sensors are calibrated, the radar features can be used to localize obstacles in the image, while the camera features can be used to delineate the localized obstacles. We propose a novel deep learning-based sensor fusion framework, termed "RVNet", for the effective fusion of a monocular camera and a long-range radar for obstacle detection. RVNet is a single-shot object detection network with two input branches and two output branches. The input branches contain separate branches for the monocular camera features and the radar features; the radar features are formulated using a novel feature descriptor, termed the "sparse radar image". The output branches contain separate branches for small obstacles and big obstacles, respectively. The proposed network is validated against state-of-the-art baseline algorithms on the public nuScenes dataset. Additionally, a detailed parameter analysis is performed with several variants of the RVNet. The experimental results show that the proposed network outperforms the baseline algorithms under varying environmental conditions.
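The abstract describes RVNet as a single-shot detector with separate camera and radar input branches, a fusion point, and separate output heads for small and big obstacles. The following Keras sketch illustrates one plausible reading of that topology only; the layer counts, input resolutions, the concatenation-based fusion, and the name build_rvnet_sketch are assumptions for illustration and are not taken from the paper.

# Minimal sketch (assumption): a two-input, two-output single-shot detector
# in the spirit of the RVNet description above. Layer sizes, resolutions and
# the concatenation fusion are illustrative choices, not the paper's design.
from tensorflow.keras import layers, Model

def build_rvnet_sketch(img_shape=(416, 416, 3), radar_shape=(416, 416, 1),
                       num_anchors=3, num_classes=1):
    # Camera input branch: shallow convolutional encoder standing in for the
    # image backbone.
    img_in = layers.Input(shape=img_shape, name="camera_image")
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(img_in)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)

    # Radar input branch: consumes the "sparse radar image" (radar returns
    # projected onto the image plane after calibration).
    radar_in = layers.Input(shape=radar_shape, name="sparse_radar_image")
    r = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(radar_in)
    r = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(r)

    # Fuse the two feature maps by channel-wise concatenation (one plausible
    # fusion choice; both branches share the same spatial resolution here).
    fused = layers.Concatenate(name="fusion")([x, r])

    # Output branch for small obstacles: detection head at the fused resolution.
    s = layers.Conv2D(128, 3, padding="same", activation="relu")(fused)
    small_out = layers.Conv2D(num_anchors * (5 + num_classes), 1,
                              name="small_obstacles")(s)

    # Output branch for big obstacles: further-downsampled detection head.
    b = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(fused)
    big_out = layers.Conv2D(num_anchors * (5 + num_classes), 1,
                            name="big_obstacles")(b)

    return Model(inputs=[img_in, radar_in], outputs=[small_out, big_out])

model = build_rvnet_sketch()
model.summary()  # shows both input branches, the fusion layer and both heads

The two 1x1 heads follow the usual single-shot convention of predicting box offsets, objectness and class scores per anchor cell; in a full implementation each head would be paired with an anchor set matched to small or large obstacle scales.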
Pages: 351-364 (14 pages)