SAT-GCN: Self-attention graph convolutional network-based 3D object detection for autonomous driving

Cited by: 72
Authors
Wang, Li [1 ,2 ]
Song, Ziying [3 ]
Zhang, Xinyu [1 ,2 ]
Wang, Chenfei [1 ,2 ]
Zhang, Guoxin [4 ]
Zhu, Lei [5 ]
Li, Jun [1 ,2 ]
Liu, Huaping [6 ,7 ]
Affiliations
[1] Tsinghua Univ, State Key Lab Automot Safety & Energy, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[3] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[4] Hebei Univ Sci & Technol, Sch Informat Sci & Engn, Shijiazhuang 050018, Peoples R China
[5] Mogo Auto Intelligence & Telemet Informat Technol, Beijing 100013, Peoples R China
[6] Tsinghua Univ, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[7] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; National High Technology Research and Development Program of China (863 Program);
Keywords
3D object detection; Graph convolutional network; Self-attention mechanism; Vehicle detection; Point cloud; LiDAR;
DOI
10.1016/j.knosys.2022.110080
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Accurate 3D object detection from point clouds is critical for autonomous vehicles. However, point cloud data collected by LiDAR sensors are inherently sparse, especially at long distances. In addition, most existing 3D object detectors extract local features and ignore interactions between features, producing weak semantic information that significantly limits detection performance. We propose a self-attention graph convolutional network (SAT-GCN), which utilizes a GCN and self-attention to enhance semantic representations by aggregating neighborhood information and focusing on vital relationships. SAT-GCN consists of three modules: vertex feature extraction (VFE), self-attention with dimension reduction (SADR), and far distance feature suppression (FDFS). VFE extracts neighboring relationships between features using a GCN after encoding the raw point cloud. SADR further augments the weights of crucial neighboring relationships through self-attention. FDFS suppresses meaningless edges formed by sparse point cloud distributions in remote areas and generates corresponding global features. Extensive experiments are conducted on the widely used KITTI and nuScenes 3D object detection benchmarks. The results demonstrate significant improvements over the mainstream methods PointPillars, SECOND, and PointRCNN, raising mean 3D AP by 4.88%, 5.02%, and 2.79%, respectively, on the KITTI test set. SAT-GCN boosts point cloud detection accuracy, especially at medium and long distances. Furthermore, adding the SAT-GCN module has limited impact on real-time performance and model parameters. (c) 2022 Elsevier B.V. All rights reserved.
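The three-stage pipeline the abstract describes (GCN-style neighbor aggregation, self-attention reweighting, and suppression of edges between far-apart points) can be sketched in a few lines of NumPy. All shapes, weight matrices, the k-NN construction, and the distance threshold below are illustrative assumptions for intuition only, not the paper's actual SAT-GCN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point cloud: N points with 3D coordinates and C-dim features.
N, C, K = 16, 8, 4
xyz = rng.normal(size=(N, 3))
feats = rng.normal(size=(N, C))

# VFE-like step: build a k-NN graph and aggregate neighbor features (GCN-style).
d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)       # (N, N) squared distances
knn = np.argsort(d2, axis=1)[:, 1:K + 1]                      # K nearest neighbors, excluding self

W = rng.normal(size=(C, C)) * 0.1                             # shared linear weight (illustrative)
neighbor_feats = feats[knn]                                   # (N, K, C)
aggregated = np.maximum(neighbor_feats @ W, 0.0).max(axis=1)  # ReLU, then max-pool over neighbors

# SADR-like step: scaled dot-product self-attention over the aggregated features.
Wq, Wk, Wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))
q, k, v = aggregated @ Wq, aggregated @ Wk, aggregated @ Wv
scores = (q @ k.T) / np.sqrt(C)                               # (N, N) attention logits

# FDFS-like step: mask attention between far-apart points (hypothetical median threshold).
far = d2 > np.quantile(d2, 0.5)
scores = np.where(far, -1e9, scores)

attn = np.exp(scores - scores.max(axis=1, keepdims=True))     # row-wise softmax
attn /= attn.sum(axis=1, keepdims=True)
enhanced = attn @ v                                           # (N, C) attention-enhanced features
print(enhanced.shape)                                         # (16, 8)
```

The masking step mirrors the motivation given in the abstract: in sparse remote regions, graph edges carry little information, so down-weighting them before the softmax keeps the attention mass on informative nearby relationships.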
Pages: 13