CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for Robust 3D Object Detection

Cited by: 31
Authors
Hwang, Jyh-Jing [1 ]
Kretzschmar, Henrik [1 ]
Manela, Joshua [1 ]
Rafferty, Sean [1 ]
Armstrong-Crews, Nicholas [1 ]
Chen, Tiffany [1 ]
Anguelov, Dragomir [1 ]
Affiliation
[1] Waymo, Mountain View, CA 94043 USA
Source
COMPUTER VISION, ECCV 2022, PT XXXVIII | 2022 / Vol. 13698
Keywords
Sensor fusion; Cross attention; Robust 3D object detection
DOI
10.1007/978-3-031-19839-7_23
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Robust 3D object detection is critical for safe autonomous driving. Camera and radar sensors are synergistic as they capture complementary information and work well under different environmental conditions. Fusing camera and radar data is challenging, however, as each of the sensors lacks information along a perpendicular axis, that is, depth is unknown to camera and elevation is unknown to radar. We propose the camera-radar matching network CramNet, an efficient approach to fuse the sensor readings from camera and radar in a joint 3D space. To leverage radar range measurements for better camera depth predictions, we propose a novel ray-constrained cross-attention mechanism that resolves the ambiguity in the geometric correspondences between camera features and radar features. Our method supports training with sensor modality dropout, which leads to robust 3D object detection, even when a camera or radar sensor suddenly malfunctions on a vehicle. We demonstrate the effectiveness of our fusion approach through extensive experiments on the RADIATE dataset, one of the few large-scale datasets that provide radar radio frequency imagery. A camera-only variant of our method achieves competitive performance in monocular 3D object detection on the Waymo Open Dataset.
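The ray-constrained cross-attention the abstract describes can be sketched in a few lines: a camera pixel's feature acts as the query, and the keys/values are radar features sampled at candidate depths along that pixel's 3D ray, so the attention weights effectively express a distribution over depth. This is a minimal single-query NumPy illustration under assumptions of my own (function name, shapes, scaled dot-product scoring), not Waymo's actual implementation.

```python
import numpy as np

def ray_constrained_cross_attention(cam_feat, radar_feats_along_ray):
    """Hypothetical sketch of ray-constrained cross-attention.

    cam_feat: (d,) feature of one camera pixel (the query).
    radar_feats_along_ray: (k, d) radar features sampled at k candidate
        depths along that pixel's camera ray (keys and values).
    Returns the fused (d,) feature and the (k,) attention weights,
    which can be read as a soft distribution over depth candidates.
    """
    d = cam_feat.shape[0]
    # Scaled dot-product scores between the query and each depth candidate.
    scores = radar_feats_along_ray @ cam_feat / np.sqrt(d)   # (k,)
    scores = scores - scores.max()                            # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()           # softmax over depths
    fused = weights @ radar_feats_along_ray                   # (d,) weighted value sum
    return fused, weights
```

In this toy setup, a radar return that matches the camera feature at one depth candidate pulls most of the attention mass to that depth, which is the intuition behind using radar range to disambiguate camera depth.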
Pages: 388-405
Page count: 18