EPMF: Efficient Perception-Aware Multi-Sensor Fusion for 3D Semantic Segmentation

Cited by: 10
Authors
Tan, Mingkui [1 ,2 ]
Zhuang, Zhuangwei [1 ,2 ]
Chen, Sitao [1 ]
Li, Rong [1 ]
Jia, Kui [3 ]
Wang, Qicheng [4 ,5 ]
Li, Yuanqing [2 ]
Affiliations
[1] South China Univ Technol, Sch Software Engn, Guangzhou 510641, Guangdong, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou 510641, Guangdong, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Math, Clear Water Bay, Hong Kong, Peoples R China
[5] Minieye, Shenzhen 518063, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Point cloud compression; Laser radar; Cameras; Semantic segmentation; Three-dimensional displays; Feature extraction; Sensors; 3D semantic segmentation; autonomous driving; deep neural networks; multi-sensor fusion; scene understanding;
DOI
10.1109/TPAMI.2024.3402232
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
We study multi-sensor fusion for 3D semantic segmentation, which is important for scene understanding in many applications such as autonomous driving and robotics. Existing fusion-based methods, however, may not achieve promising performance due to the vast difference between the two modalities. In this work, we investigate a collaborative fusion scheme called perception-aware multi-sensor fusion (PMF) to effectively exploit perceptual information from two modalities, namely appearance information from RGB images and spatio-depth information from point clouds. To this end, we project point clouds to the camera coordinate system using perspective projection and process both the LiDAR and camera inputs in 2D space, avoiding the information loss of RGB images. We then propose a two-stream network to extract features from the two modalities separately; the extracted features are fused by effective residual-based fusion modules. Moreover, we introduce additional perception-aware losses to measure the perceptual difference between the two modalities. Finally, we propose an improved version of PMF, i.e., EPMF, which is more efficient and effective thanks to optimized data pre-processing and network architecture under perspective projection. Specifically, we propose cross-modal alignment and cropping to obtain tight inputs and reduce unnecessary computational cost. We then explore more efficient contextual modules under perspective projection and fuse the LiDAR features into the camera stream to boost the performance of the two-stream network. Extensive experiments on benchmark data sets show the superiority of our method. For example, on the nuScenes test set, our EPMF outperforms the state-of-the-art method RangeFormer by 0.9% in mIoU.
Pages: 8258-8273
Page count: 16
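The perspective projection step described in the abstract, mapping LiDAR points into the camera image plane so that both modalities can be processed in a common 2D space, can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model with known LiDAR-to-camera extrinsics and camera intrinsics; the function name and argument conventions (T_cam_lidar, K) are hypothetical and are not taken from the authors' code.

```python
import numpy as np

def project_points_to_image(points_lidar, T_cam_lidar, K, image_size):
    """Project LiDAR points onto the camera image plane via perspective projection.

    Args:
        points_lidar: (N, 3) XYZ points in the LiDAR frame.
        T_cam_lidar: (4, 4) extrinsic matrix mapping the LiDAR frame to the camera frame.
        K: (3, 3) camera intrinsic matrix.
        image_size: (height, width) of the RGB image.
    Returns:
        uv: (M, 2) integer pixel coordinates of points that land inside the image.
        depth: (M,) depths of those points in the camera frame.
        mask: (N,) boolean mask marking which input points were kept.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.concatenate([points_lidar, np.ones((n, 1))], axis=1)  # (N, 4)
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                       # (N, 3)

    # Keep only points in front of the camera (positive depth).
    in_front = pts_cam[:, 2] > 1e-6
    idx = np.flatnonzero(in_front)
    pts_front = pts_cam[in_front]

    # Perspective projection: u = fx * x / z + cx, v = fy * y / z + cy.
    uv_h = (K @ pts_front.T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]

    # Discard projections that fall outside the image bounds.
    h, w = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    mask = np.zeros(n, dtype=bool)
    mask[idx[inside]] = True
    return uv[inside].astype(np.int64), pts_front[inside, 2], mask
```

Under this kind of projection, the retained pixel coordinates can be used to rasterize the LiDAR attributes (e.g., depth and intensity) into a 2D map aligned with the RGB image, which is what lets a two-stream network like the one described in the abstract fuse LiDAR and camera features in the same 2D space.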