Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes

Cited by: 3
Authors
Li, Shuguang [1 ]
Yan, Jiafu [2 ]
Chen, Haoran [1 ]
Zheng, Ke [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu 611731, Peoples R China
Keywords
depth estimation; radar; camera; dual-branch network;
DOI
10.3390/s23177560
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
Depth estimation is an important part of the perception system in autonomous driving. Current studies often reconstruct dense depth maps from RGB images and sparse depth maps obtained from other sensors. However, existing methods often pay insufficient attention to latent semantic information. Considering the highly structured characteristics of driving scenes, we propose a dual-branch network that predicts dense depth maps by fusing radar and RGB images. The proposed architecture divides the driving scene into three parts, predicts a depth map for each, and finally merges them into one via a fusion strategy, making full use of the latent semantic information in the driving scene. In addition, a variant L1 loss function is applied in the training phase, directing the network to focus more on areas of interest when driving. Our proposed method is evaluated on the nuScenes dataset. Experiments demonstrate its effectiveness in comparison with previous state-of-the-art methods.
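The abstract does not give the exact form of the variant L1 loss; one plausible reading is an L1 loss with larger weights on pixels inside a region-of-interest mask (e.g., road and obstacle regions). The sketch below illustrates that idea in NumPy; the function name, the `roi_weight` parameter, and the masking of invalid (zero-depth) pixels are assumptions, not details from the paper.

```python
import numpy as np

def weighted_l1_loss(pred, target, roi_mask, roi_weight=2.0):
    """Hypothetical variant L1 loss: pixels inside the region-of-interest
    mask contribute roi_weight times more than background pixels.
    Only pixels with valid (positive) ground-truth depth are supervised."""
    weights = np.where(roi_mask, roi_weight, 1.0)
    valid = target > 0
    abs_err = np.abs(pred - target) * weights * valid
    # normalize by the total weight of the supervised pixels
    return abs_err.sum() / np.maximum((weights * valid).sum(), 1.0)

# toy example: 2x2 depth maps, bottom row is the region of interest
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.5, 2.0], [2.0, 4.5]])
roi = np.array([[False, False], [True, True]])
loss = weighted_l1_loss(pred, target, roi)  # errors in the ROI count double
```

With `roi_weight=2.0`, an error in the region of interest contributes twice as much to the loss as the same error elsewhere, which matches the stated goal of steering the network toward areas that matter when driving.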
Pages: 16