VISTA: Boosting 3D Object Detection via Dual Cross-VIew SpaTial Attention

Cited by: 54
Authors
Deng, Shengheng [1]
Liang, Zhihao [1,3]
Sun, Lin [2]
Jia, Kui [1,4]
Affiliations
[1] South China Univ Technol, Guangzhou, Peoples R China
[2] Magic Leap, Sunnyvale, CA USA
[3] DexForce Technol Co Ltd, Shenzhen, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
DOI
10.1109/CVPR52688.2022.00826
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Detecting objects from LiDAR point clouds is of tremendous significance in autonomous driving. Despite good progress, accurate and reliable 3D detection has yet to be achieved due to the sparsity and irregularity of LiDAR point clouds. Among existing strategies, multi-view methods have shown great promise by leveraging the more comprehensive information from both the bird's eye view (BEV) and the range view (RV). These multi-view methods either refine proposals predicted from a single view via fused features, or fuse the features without considering the global spatial context; consequently, their performance is limited. In this paper, we propose to adaptively fuse multi-view features in a global spatial context via Dual Cross-VIew SpaTial Attention (VISTA). The proposed VISTA is a novel plug-and-play fusion module in which the multi-layer perceptron widely adopted in standard attention modules is replaced with a convolutional one. Thanks to the learned attention mechanism, VISTA can produce high-quality fused features for proposal prediction. We decouple the classification and regression tasks in VISTA and apply an additional attention-variance constraint that enables the attention module to focus on specific targets instead of generic points. We conduct thorough experiments on the nuScenes and Waymo benchmarks; the results confirm the efficacy of our designs. At the time of submission, our method achieves 63.0% overall mAP and 69.8% NDS on the nuScenes benchmark, outperforming all published methods by up to 24% in safety-critical categories such as cyclist.
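The mechanisms named in the abstract (convolutional Q/K/V projections in place of the linear MLP of standard attention, decoupled classification/regression branches, and an attention-variance constraint) can be illustrated with a minimal PyTorch sketch. Everything below, including the module name, tensor shapes, and the exact form of the variance penalty, is an illustrative assumption, not the authors' released implementation.

# Minimal sketch of the ideas described in the abstract, assuming PyTorch.
import torch
import torch.nn as nn

class CrossViewSpatialAttention(nn.Module):
    """Cross-attention from a query view (e.g. BEV) onto a key/value view
    (e.g. RV). Standard attention projects tokens with linear layers (an
    MLP); here 3x3 convolutions are used instead, so each projection also
    aggregates local spatial context, as the abstract describes."""

    def __init__(self, channels):
        super().__init__()
        # Convolutional Q/K/V projections in place of per-token MLPs.
        self.q_proj = nn.Conv2d(channels, channels, 3, padding=1)
        self.k_proj = nn.Conv2d(channels, channels, 3, padding=1)
        self.v_proj = nn.Conv2d(channels, channels, 3, padding=1)
        self.scale = channels ** -0.5

    def forward(self, query_view, kv_view):
        # query_view: (B, C, Hq, Wq); kv_view: (B, C, Hk, Wk)
        b, c, hq, wq = query_view.shape
        q = self.q_proj(query_view).flatten(2).transpose(1, 2)  # (B, Nq, C)
        k = self.k_proj(kv_view).flatten(2).transpose(1, 2)     # (B, Nk, C)
        v = self.v_proj(kv_view).flatten(2).transpose(1, 2)     # (B, Nk, C)
        # Global cross-view attention: every query location can attend to
        # every key location, giving the global spatial context.
        attn = torch.softmax((q @ k.transpose(1, 2)) * self.scale, dim=-1)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, hq, wq)
        return fused, attn

def attention_variance_penalty(attn):
    """Hypothetical stand-in for the attention-variance constraint: a
    softmax row with high variance across keys is peaked on few targets,
    so minimizing the negative variance pushes each query to focus on
    specific points rather than spreading weight over generic ones."""
    return -attn.var(dim=-1).mean()  # attn: (B, Nq, Nk)

# The abstract also decouples classification and regression; one natural
# (assumed) reading is a separate attention branch per task:
cls_branch = CrossViewSpatialAttention(128)
reg_branch = CrossViewSpatialAttention(128)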
Pages: 8438-8447
Page count: 10