ScatterFormer: Efficient Voxel Transformer with Scattered Linear Attention

Times Cited: 0
Authors
He, Chenhang [1 ]
Li, Ruihuang [1 ,2 ]
Zhang, Guowen [1 ]
Zhang, Lei [1 ,2 ]
Affiliations
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[2] OPPO Res, Shenzhen, Peoples R China
Source
COMPUTER VISION - ECCV 2024, PT XXIX | 2025 / Vol. 15087
Keywords
3D Object Detection; Voxel Transformer;
DOI
10.1007/978-3-031-73397-0_5
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Window-based transformers excel in large-scale point cloud understanding by capturing context-aware representations with affordable attention computation in a more localized manner. However, the sparse nature of point clouds leads to a significant variance in the number of voxels per window. Existing methods group the voxels in each window into fixed-length sequences through extensive sorting and padding operations, resulting in a non-negligible computational and memory overhead. In this paper, we introduce ScatterFormer, which, to the best of our knowledge, is the first to directly apply attention to voxels across different windows as a single sequence. The key to ScatterFormer is a Scattered Linear Attention (SLA) module, which leverages the pre-computation of key-value pairs in linear attention to enable parallel computation on the variable-length voxel sequences divided by windows. Leveraging the hierarchical structure of GPUs and shared memory, we propose a chunk-wise algorithm that reduces the SLA module's latency to less than 1 millisecond on moderate GPUs. Furthermore, we develop a cross-window interaction module that improves the locality and connectivity of voxel features across different windows, eliminating the need for extensive window shifting. Our proposed ScatterFormer demonstrates 73.8 mAP (L2) on the Waymo Open Dataset and 72.4 NDS on the NuScenes dataset, running at an outstanding detection rate of 23 FPS. The code is available at https://github.com/skyhehe123/ScatterFormer.
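The abstract's central idea, applying linear attention independently to variable-length, window-divided voxel sequences, can be illustrated with a minimal NumPy sketch. This is not the authors' chunk-wise GPU implementation; the `elu(x)+1` feature map and the function name `scattered_linear_attention` are assumptions for illustration, chosen because a positive feature map is standard in linear attention.

```python
import numpy as np

def scattered_linear_attention(q, k, v, seg_ids):
    """Sketch of linear attention applied per window segment.

    q, k, v: (N, d) voxel features; seg_ids: (N,) window index per voxel.
    Uses elu(x)+1 as the kernel feature map (an assumption; the paper's
    exact feature map may differ).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x)+1, keeps features positive
    out = np.empty_like(v)
    for w in np.unique(seg_ids):
        m = seg_ids == w
        qw, kw, vw = phi(q[m]), phi(k[m]), v[m]
        kv = kw.T @ vw               # (d, d): pre-computed key-value matrix for this window
        z = kw.sum(axis=0)           # (d,): normalisation statistics
        out[m] = (qw @ kv) / (qw @ z)[:, None]
    return out
```

Because each window only contributes a fixed-size `(d, d)` key-value matrix regardless of how many voxels it contains, no sorting or padding to a fixed sequence length is needed, which is the property the SLA module exploits for parallel GPU execution.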
Pages: 74 - 92
Page count: 19