SEGANet: 3D object detection with shape-enhancement and geometry-aware network

Cited by: 5
Authors
Zhou, Jing [1 ]
Hu, Yiyu [1 ]
Lai, Zhongyuan [1 ]
Wang, Tianjiang [2 ]
Affiliations
[1] Jianghan Univ, Sch Artificial Intelligence, Wuhan 430056, Hubei, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Hubei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
3D object detection; Weakly sensing objects; Point cloud completion; Sparse attention; Transformer
DOI
10.1016/j.compeleceng.2023.108888
CLC Classification
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
3D object detection from point clouds has advanced rapidly. In real scenes, however, point clouds are unevenly distributed, so distant or occluded objects are covered by too few points to be perceived reliably, which degrades overall detection accuracy. We therefore propose a novel two-stage 3D object detection framework, the Shape-Enhancement and Geometry-Aware Network (SEGANet), which mitigates the negative impact of unbalanced point distribution to boost detection performance. In stage 1, we capture fine-grained structural knowledge with the assistance of point-wise features from voxels to generate proposals. In stage 2, a shape-enhancement module reconstructs complete surface points for the objects within each proposal, and a geometric relevance-aware Transformer module aggregates highly correlated feature pairs between the reconstructed and known parts and decodes the key geometric relations of the aggregated features. Critical geometric cues are thus supplied at both the data and feature levels, yielding enhanced features for box refinement. Extensive experiments on the KITTI and Waymo datasets show that SEGANet combines low model complexity with strong detection accuracy, surpassing the baseline method by 2.18% in overall detection accuracy and by 1.8% in the average accuracy of weakly sensing objects. This verifies that SEGANet effectively alleviates the impact of point imbalance and significantly boosts detection performance.
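The stage-2 design lends itself to a brief illustration. Below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of the core idea the abstract describes: a sparse, geometry-aware attention layer in which features of known surface points attend only to their top-k most relevant reconstructed points. All names here (GeometryAwareSparseAttention, top_k, the linear positional bias) are assumptions for illustration; the paper's actual module and its geometry encoding may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryAwareSparseAttention(nn.Module):
    """Attend from known surface points to reconstructed points, keeping
    only the top-k most correlated pairs per query (sparse attention)."""

    def __init__(self, dim: int, top_k: int = 8):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Encode reconstructed-point coordinates as a geometric bias
        # (a simplification of the paper's relevance-aware geometry encoding).
        self.pos = nn.Linear(3, dim)
        self.top_k = top_k
        self.scale = dim ** -0.5

    def forward(self, known_feat, known_xyz, recon_feat, recon_xyz):
        # known_feat: (N, C), known_xyz: (N, 3)
        # recon_feat: (M, C), recon_xyz: (M, 3)
        q = self.q(known_feat)                        # (N, C)
        k = self.k(recon_feat) + self.pos(recon_xyz)  # (M, C)
        v = self.v(recon_feat)                        # (M, C)
        attn = (q @ k.t()) * self.scale               # (N, M) relevance scores
        # Sparsify: keep only the top-k most relevant reconstructed points
        # per known-point query; mask the rest out before the softmax.
        topk = min(self.top_k, attn.size(1))
        idx = attn.topk(topk, dim=1).indices          # (N, topk)
        mask = torch.full_like(attn, float('-inf'))
        mask.scatter_(1, idx, 0.0)
        attn = F.softmax(attn + mask, dim=1)
        return known_feat + attn @ v                  # residual aggregation, (N, C)

if __name__ == "__main__":
    layer = GeometryAwareSparseAttention(dim=64, top_k=8)
    known_f, known_p = torch.randn(128, 64), torch.randn(128, 3)
    recon_f, recon_p = torch.randn(256, 64), torch.randn(256, 3)
    out = layer(known_f, known_p, recon_f, recon_p)
    print(out.shape)  # torch.Size([128, 64])

Restricting each query to its top-k keys is what makes the attention sparse; in the paper's terms, it aggregates only highly correlated reconstructed-known feature pairs rather than attending densely over all point pairs.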
Pages: 19