Transformer-Based Optimized Multimodal Fusion for 3D Object Detection in Autonomous Driving

Cited by: 4
Authors
Alaba, Simegnew Yihunie [1 ]
Ball, John E. [1 ]
Affiliations
[1] Mississippi State Univ, James Worth Bagley Coll Engn, Dept Elect & Comp Engn, Starkville, MS 39762 USA
Keywords
Laser radar; Three-dimensional displays; Transformers; Point cloud compression; Object detection; Feature extraction; Cameras; Autonomous driving; LiDAR; multimodal fusion; network compression; pruning; quantization; quantization-aware training; sparsity; vision transformer; 3D object detection
DOI
10.1109/ACCESS.2024.3385439
CLC number
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Accurate 3D object detection is vital for autonomous driving because it enables reliable perception of the environment through multiple sensors. Although cameras capture detailed color and texture features, they provide limited depth information and can struggle under adverse weather or lighting conditions. In contrast, LiDAR sensors offer robust depth information but lack the visual detail needed for precise object classification. To address these challenges, this work presents a multimodal fusion model that improves 3D object detection by combining the strengths of LiDAR and camera sensors. The model processes camera images and LiDAR point cloud data into a voxel-based representation, which is further refined by encoder networks to enhance spatial interaction and reduce semantic ambiguity. A proposed multiresolution attention module, together with the integration of the discrete wavelet transform and inverse discrete wavelet transform into the image backbone, improves feature extraction and strengthens the fusion of LiDAR depth information with the camera's texture and color detail. The model also incorporates a transformer decoder with self-attention and cross-attention mechanisms, enabling robust and accurate detection through global interaction between detected objects and encoder features. Furthermore, the network is refined with advanced optimization techniques, including pruning and Quantization-Aware Training (QAT), maintaining competitive performance while significantly reducing memory and computational requirements. Evaluations on the nuScenes dataset show that the optimized architecture delivers competitive accuracy while substantially improving the efficiency of multimodal fusion 3D object detection.
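The abstract mentions two compression techniques, pruning and quantization-aware training. As a minimal illustrative sketch (not the paper's implementation; function names, the sparsity target, and the bit width are assumptions for illustration), magnitude pruning zeroes the smallest weights, while QAT simulates low-precision arithmetic during training via a quantize-dequantize ("fake quantization") step:

```python
# Hypothetical sketch of the two compression steps named in the abstract;
# the real model applies them to network layers, not a plain weight list.

def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k] if k > 0 else 0.0
    return [0.0 if abs(w) < threshold else w for w in weights]

def fake_quantize(weights, num_bits=8):
    """Quantize-dequantize: rounds weights to an integer grid but keeps them
    in float, which is how QAT exposes quantization error during training."""
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 127 for signed 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) * scale for w in weights]

w = [0.02, -0.75, 0.40, -0.01, 0.90, 0.05]
pruned = magnitude_prune(w, sparsity=0.5)      # small weights become 0.0
quantized = fake_quantize(pruned, num_bits=8)  # snapped to the int8 grid
```

In a real pipeline these operations would be applied per-layer inside the training loop (e.g. via a framework's pruning and QAT utilities), so the network learns weights that remain accurate under the imposed sparsity and precision constraints.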
Pages: 50165-50176
Page count: 12