NeXtFusion: Attention-Based Camera-Radar Fusion Network for Improved Three-Dimensional Object Detection and Tracking

Cited by: 3
Authors
Kalgaonkar, Priyank [1 ]
El-Sharkawy, Mohamed [1 ]
Affiliations
[1] Purdue School of Engineering & Technology, Department of Electrical & Computer Engineering, Indianapolis, IN 46202, USA
Keywords
CondenseNeXt; sensor fusion; object detection; autonomous vehicle; PyTorch
DOI
10.3390/fi16040114
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Accurate perception is crucial for autonomous vehicles (AVs) to navigate safely, especially in adverse weather and lighting conditions, where single-sensor networks (e.g., camera-only or radar-only) struggle with reduced maneuverability and unrecognizable targets. Deep camera-radar fusion neural networks offer a promising solution for reliable AV perception under any weather and lighting conditions: cameras provide rich semantic information, while radar acts like X-ray vision, piercing through fog and darkness. This work proposes NeXtFusion, a novel, efficient camera-radar fusion network for robust AV perception that improves object detection accuracy and tracking. The proposed attention module enhances crucial feature representations for object detection while minimizing information loss from multi-modal data. Extensive experiments on the challenging nuScenes dataset demonstrate NeXtFusion's superior performance in detecting small and distant objects compared to other methods. Notably, NeXtFusion achieves the highest mAP score (0.473) on the nuScenes validation set, outperforming competitors such as OFT (a 35.1% improvement) and MonoDIS (a 9.5% improvement). NeXtFusion also performs strongly on other metrics, such as mATE (0.449) and mAOE (0.534), highlighting its overall effectiveness in 3D object detection. Furthermore, visualizations of nuScenes data processed by NeXtFusion demonstrate its capability to handle diverse real-world scenarios. These results suggest that NeXtFusion is a promising deep fusion network for improving AV perception and safety in autonomous driving.
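The abstract's core idea, attention-based fusion of camera and radar features, can be illustrated with a minimal PyTorch sketch (PyTorch is listed among the keywords). This is a hypothetical cross-attention block, not the paper's published code: the module name CrossAttentionFusion, the channel sizes, and the choice of camera features as queries with radar features as keys/values are all assumptions made for illustration.

# Minimal, illustrative sketch of an attention-based camera-radar feature
# fusion block in PyTorch. All module and tensor names are hypothetical;
# this is not the architecture published in the paper.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuses camera and radar feature maps with cross-attention:
    camera features form the queries, radar features the keys/values."""

    def __init__(self, cam_channels: int, radar_channels: int,
                 embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.cam_proj = nn.Conv2d(cam_channels, embed_dim, kernel_size=1)
        self.radar_proj = nn.Conv2d(radar_channels, embed_dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, cam_feat: torch.Tensor, radar_feat: torch.Tensor) -> torch.Tensor:
        # cam_feat:   (B, C_cam,   H, W) camera backbone features
        # radar_feat: (B, C_radar, H, W) radar features rasterized to the same grid
        b, _, h, w = cam_feat.shape
        q = self.cam_proj(cam_feat).flatten(2).transpose(1, 2)      # (B, H*W, D)
        kv = self.radar_proj(radar_feat).flatten(2).transpose(1, 2)  # (B, H*W, D)
        fused, _ = self.attn(q, kv, kv)   # radar cues weighted per camera location
        fused = self.norm(fused + q)      # residual keeps camera semantics intact
        return fused.transpose(1, 2).reshape(b, -1, h, w)

# Usage: fuse 256-channel camera features with 64-channel radar features.
fusion = CrossAttentionFusion(cam_channels=256, radar_channels=64)
out = fusion(torch.randn(2, 256, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 256, 32, 32])

Weighting radar evidence per camera location in this way is one plausible reading of how an attention module could "minimize information loss from multi-modal data," as the abstract claims.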
Pages: 22