Sparsity-Robust Feature Fusion for Vulnerable Road-User Detection with 4D Radar

Cited by: 2
Authors
Ruddat, Leon [1 ]
Reichardt, Laurenz [1 ]
Ebert, Nikolas [1 ,2 ]
Wasenmueller, Oliver [1 ]
Affiliations
[1] Mannheim Univ Appl Sci, Res & Transfer Ctr CeMOS, D-68163 Mannheim, Germany
[2] RPTU Kaiserslautern Landau, Dept Comp Sci, D-67663 Kaiserslautern, Germany
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 07
Keywords
4D radar; 3D object detection; attention; network
DOI
10.3390/app14072781
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Detecting vulnerable road users is a major challenge for autonomous vehicles due to their small size. Various sensor modalities have been investigated, including monocular or stereo cameras and 3D LiDAR sensors, which are limited by environmental conditions and hardware costs. Radar sensors are a low-cost and robust alternative, and high-resolution 4D radar sensors are suitable for advanced detection tasks. However, they pose challenges of their own, such as few, irregularly distributed measurement points and disturbing artifacts. Learning-based approaches utilizing pillar-based networks show potential in overcoming these challenges, yet the severe sparsity of radar data makes detecting small objects with only a few points difficult. We extend a pillar network with our novel Sparsity-Robust Feature Fusion (SRFF) neck, which combines high- and low-level multi-resolution features through a lightweight attention mechanism. While low-level features aid localization, high-level features allow for better classification. As sparse input data are propagated through the network, the growing effective receptive field yields feature maps of different sparsities; combining features of different sparsities improves the robustness of the network on classes with few points.
Pages: 11
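
The following is a minimal PyTorch sketch of the kind of attention-guided multi-resolution fusion the abstract describes: several BEV feature maps from a pillar backbone are projected to a common width, upsampled to the finest resolution, concatenated, and reweighted by a lightweight channel attention before the final projection. The class name SparsityRobustFusion, the squeeze-and-excitation-style attention, the channel widths, and the reduction ratio are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn


class SparsityRobustFusion(nn.Module):
    """Hypothetical sketch of an SRFF-style neck.

    Fuses multi-resolution BEV feature maps (fine to coarse) with a
    lightweight channel attention. All sizes are illustrative.
    """

    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        # 1x1 convs project every pyramid level to a shared channel width.
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        fused = out_channels * len(in_channels)
        # Squeeze-and-excitation-style attention over the concatenated levels:
        # global average pool, bottleneck, then per-channel gating weights.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // 8, fused, kernel_size=1),
            nn.Sigmoid(),
        )
        self.out = nn.Conv2d(fused, out_channels, kernel_size=1)

    def forward(self, features):
        # `features` holds BEV maps ordered fine to coarse; upsample all
        # levels to the finest resolution so they can be concatenated.
        target = features[0].shape[-2:]
        projected = [
            nn.functional.interpolate(
                lat(f), size=target, mode="bilinear", align_corners=False
            )
            for lat, f in zip(self.lateral, features)
        ]
        x = torch.cat(projected, dim=1)
        x = x * self.attn(x)  # reweight channels before the final projection
        return self.out(x)


# Toy usage with three pillar-style BEV maps of decreasing resolution.
feats = [
    torch.randn(1, 64, 128, 128),
    torch.randn(1, 128, 64, 64),
    torch.randn(1, 256, 32, 32),
]
neck = SparsityRobustFusion()
print(neck(feats).shape)  # torch.Size([1, 128, 128, 128])
```

Gating the concatenated levels jointly, rather than attending to each level in isolation, lets the network weigh low-level features (better localization) against high-level features (better classification) per channel, which matches the abstract's motivation for combining feature maps of different sparsities.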