SparseDet: A Simple and Effective Framework for Fully Sparse LiDAR-Based 3-D Object Detection

Cited by: 1
Authors
Liu, Lin [1 ]
Song, Ziying [1 ]
Xia, Qiming [2 ]
Jia, Feiyang [1 ]
Jia, Caiyan [1 ]
Yang, Lei [3 ,4 ]
Gong, Yan [5 ]
Pan, Hongyu [6 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] Xiamen Univ, Fujian Key Lab Sensing & Comp Smart Cities, Xiamen 361005, Fujian, Peoples R China
[3] Tsinghua Univ, State Key Lab Intelligent Green Vehicle & Mobil, Beijing 100084, Peoples R China
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[5] JD Logist, Autonomous Driving Dept X Div, Beijing 101111, Peoples R China
[6] Horizon Robot, Beijing 100190, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2024, Vol. 62
Keywords
Feature extraction; Three-dimensional displays; Point cloud compression; Detectors; Aggregates; Object detection; Computational efficiency; 3-D object detection; feature aggregation; sparse detectors;
DOI
10.1109/TGRS.2024.3468394
CLC Classification
P3 [Geophysics]; P59 [Geochemistry]
Subject Classification Codes
0708 ; 070902 ;
Abstract
LiDAR-based sparse 3-D object detection plays a crucial role in autonomous driving applications due to its computational efficiency. Existing methods either use the features of a single central voxel as an object proxy or treat an aggregated cluster of foreground points as an object proxy. However, the former cannot aggregate contextual information, so the object proxies carry insufficient information, while the latter relies on multistage pipelines and auxiliary tasks that reduce inference speed. To maintain the efficiency of the sparse framework while fully aggregating contextual information, we propose SparseDet, which designs sparse queries as object proxies. It introduces two key modules: the local multiscale feature aggregation (LMFA) module and the global feature aggregation (GFA) module, which together capture contextual information and thereby enhance the ability of the proxies to represent objects. The LMFA module fuses the features of sparse key voxels across scales via coordinate transformations and nearest-neighbor relationships, capturing object-level details and local context, whereas the GFA module uses self-attention to selectively aggregate key-voxel features across the entire scene, capturing scene-level context. Experiments on nuScenes and KITTI demonstrate the effectiveness of our method. Specifically, SparseDet surpasses the previous best sparse detector VoxelNeXt (a typical method using voxels as object proxies) by 2.2% mean average precision (mAP) at 13.5 frames/s on nuScenes, and outperforms VoxelNeXt by 1.12% AP(3-D) on the hard level at 17.9 frames/s on KITTI. Moreover, on the nuScenes test set, SparseDet not only exceeds the mAP of FSDV2 (a classical method using clusters of foreground points as object proxies) but also runs 1.3 times faster. The code has been released at https://github.com/liulin813/SparseDet.git.
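As a rough illustration of the scene-level aggregation described in the abstract, below is a minimal sketch (not the released SparseDet code) of self-attention over sparse key-voxel features. The module name, tensor shapes, and the use of PyTorch's nn.MultiheadAttention are assumptions for illustration only.

```python
# Minimal sketch (assumed implementation, not the authors' code): global feature
# aggregation over sparse key-voxel features via self-attention, so that every
# key voxel can selectively attend to the rest of the scene.
import torch
import torch.nn as nn


class GlobalFeatureAggregation(nn.Module):
    """Aggregate scene-level context into key-voxel features with self-attention."""

    def __init__(self, channels: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, voxel_feats: torch.Tensor) -> torch.Tensor:
        # voxel_feats: (B, N, C) features of the N sparse key voxels in a scene.
        # Each key voxel attends to every other key voxel, so the updated
        # features carry scene-level contextual information.
        attended, _ = self.attn(voxel_feats, voxel_feats, voxel_feats)
        return self.norm(voxel_feats + attended)  # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 256, 128)  # 2 scenes, 256 key voxels, 128 channels
    out = GlobalFeatureAggregation()(feats)
    print(out.shape)  # torch.Size([2, 256, 128])
```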
Pages: 14