Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes

Cited by: 2
Authors
Li, Shuguang [1 ]
Yan, Jiafu [2 ]
Chen, Haoran [1 ]
Zheng, Ke [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu 611731, Peoples R China
Keywords
depth estimation; radar; camera; dual-branch network;
DOI
10.3390/s23177560
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
Depth estimation is an important part of the perception system in autonomous driving. Current studies often reconstruct dense depth maps from RGB images together with sparse depth maps obtained from other sensors, but existing methods often pay insufficient attention to latent semantic information. Considering the highly structured character of driving scenes, we propose a dual-branch network that predicts dense depth maps by fusing radar data and RGB images. The proposed architecture divides the driving scene into three parts, each of which predicts a depth map; these maps are then merged into one by a fusion strategy, making full use of the latent semantic information in the driving scene. In addition, a variant L1 loss function is applied during training, directing the network to focus on the areas of interest when driving. Our method is evaluated on the nuScenes dataset, and experiments demonstrate its effectiveness in comparison with previous state-of-the-art methods.
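The abstract does not specify the exact form of the variant L1 loss. A minimal sketch of one plausible reading, assuming it up-weights driving-relevant regions (the `roi_mask` and `roi_weight` names are hypothetical) and supervises only pixels with valid ground-truth depth, as is standard with sparse radar/LiDAR supervision:

```python
import numpy as np

def weighted_l1_loss(pred, target, roi_mask, roi_weight=2.0):
    """Hypothetical variant L1 loss.

    Pixels inside the region-of-interest mask (e.g., road and obstacle
    areas) are up-weighted by roi_weight so the network focuses on the
    areas that matter when driving. Only pixels with valid ground truth
    (target > 0) contribute, since radar depth supervision is sparse.
    """
    weights = np.where(roi_mask, roi_weight, 1.0)
    valid = target > 0                      # sparse ground truth
    abs_err = np.abs(pred - target) * weights
    return abs_err[valid].sum() / max(valid.sum(), 1)

# Example: a 2x2 depth map with one invalid pixel and one ROI pixel.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.5, 0.0], [2.0, 4.0]])   # 0.0 marks missing depth
roi_mask = np.array([[True, False], [False, False]])
loss = weighted_l1_loss(pred, target, roi_mask)  # (2*0.5 + 1.0 + 0.0) / 3
```

In practice the same weighting scheme carries over directly to a deep-learning framework; the design choice is simply that errors in drivable or obstacle regions are penalized more heavily than errors in, say, the sky.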
Pages: 16