Encoder-Decoder Structure Fusing Depth Information for Outdoor Semantic Segmentation

Cited by: 1
Authors
Chen, Songnan [1 ]
Tang, Mengxia [2 ]
Dong, Ruifang [2 ]
Kan, Jiangming [2 ]
Affiliations
[1] Wuhan Polytech Univ, Sch Math & Comp Sci, 36 Huanhu Middle Rd, Wuhan 430048, Peoples R China
[2] Beijing Forestry Univ, Sch Technol, 35 Qinghua East Rd, Beijing 100083, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 17
Keywords
semantic segmentation; RGB-D image; predicted depth map; fusion structure; feature pyramid; NETWORK;
DOI
10.3390/app13179924
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Code
0703;
Abstract
The semantic segmentation of outdoor images is a cornerstone of scene understanding and plays a crucial role in the autonomous navigation of robots. Although RGB-D images can provide additional depth information to improve the performance of semantic segmentation, current state-of-the-art methods fuse depth information directly from ground-truth depth maps, which depends on highly developed and expensive depth sensors. To address this problem, we proposed a self-calibrated RGB-D semantic segmentation network based on an improved residual network that does not rely on depth sensors: it fuses multi-modal information from RGB images and from depth maps predicted by a depth estimation model to enhance scene understanding. First, we designed a novel convolutional neural network (CNN) with an encoder-decoder structure as our segmentation model. The encoder, built on IResNet, extracts semantic features from the RGB image and the predicted depth map and fuses them effectively with a self-calibration fusion structure. The decoder restores the resolution of the output features through a series of successive upsampling stages. Second, we presented a feature pyramid attention mechanism that extracts the fused information at multiple scales to obtain features rich in semantic information. Experimental results on the publicly available Cityscapes dataset and on collected forest-scene images show that our model, trained with estimated depth information, achieves performance comparable to that obtained with ground-truth depth maps in improving segmentation accuracy, and even outperforms some competitive methods.
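The self-calibration fusion the abstract describes can be sketched, in spirit, as a gated sum: a calibration gate computed from both modalities rescales the (possibly noisy) predicted-depth features before they are added to the RGB features. This is a minimal illustrative sketch, not the authors' exact design; the function name and the 1x1 gating projection `w` are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_calibrated_fusion(rgb_feat, depth_feat, w):
    """Illustrative gated fusion of RGB and predicted-depth features.

    rgb_feat, depth_feat: (C, H, W) feature maps from the two encoder branches
    w: (2C,) weights of a hypothetical 1x1 gating projection (assumption)
    Returns an (C, H, W) fused feature map.
    """
    # Stack both modalities along the channel axis: (2C, H, W)
    stacked = np.concatenate([rgb_feat, depth_feat], axis=0)
    # Per-pixel calibration gate in (0, 1), computed from both modalities
    gate = sigmoid(np.tensordot(w, stacked, axes=([0], [0])))  # (H, W)
    # Depth features contribute only in proportion to the learned gate,
    # which damps unreliable regions of the estimated depth map
    return rgb_feat + gate * depth_feat

# Toy example: with zero gating weights the gate is sigmoid(0) = 0.5,
# so each output pixel is rgb + 0.5 * depth.
rgb = np.ones((4, 2, 2))
depth = np.ones((4, 2, 2))
fused = self_calibrated_fusion(rgb, depth, np.zeros(8))  # all values 1.5
```

In a trained network the gate weights would be learned end-to-end, so the model itself calibrates how much of the estimated depth signal to trust at each location.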
Pages: 17