Real-Time Semantic Segmentation via Spatial-Detail Guided Context Propagation

Cited by: 0
Authors
Hao, Shijie [1 ,2 ]
Zhou, Yuan [1 ,2 ]
Guo, Yanrong [1 ,2 ]
Hong, Richang [1 ,2 ]
Cheng, Jun [3 ,4 ]
Wang, Meng [1 ,2 ]
Affiliations
[1] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
[3] Chinese Acad Sci, Shenzhen Inst Adv Technol, CAS Key Lab Human Machine Intelligence Synergy Sys, Beijing 100864, Peoples R China
[4] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Keywords
Semantics; Computational modeling; Convolution; Pipelines; Costs; Real-time systems; Image segmentation; Accuracy; contextual information; deep learning; semantic segmentation; speed
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Nowadays, vision-based computing tasks play an important role in various real-world applications. However, many of these tasks, e.g., semantic segmentation, are computationally expensive, which poses a challenge for computing systems that are resource-constrained yet require fast response times. It is therefore valuable to develop accurate, real-time vision processing models that require only limited computational resources. To this end, we propose the spatial-detail guided context propagation network (SGCPNet) for real-time semantic segmentation. SGCPNet adopts a strategy of spatial-detail guided context propagation: the spatial details of shallow layers guide the propagation of low-resolution global contexts, so that lost spatial information can be effectively reconstructed. This removes the need to maintain high-resolution features throughout the network, which greatly improves model efficiency, while the effective reconstruction of spatial details preserves segmentation accuracy. Experiments validate the effectiveness and efficiency of the proposed SGCPNet. On the Cityscapes dataset, for example, SGCPNet achieves 69.5% mIoU segmentation accuracy while running at 178.5 FPS on 768 × 1536 images on a GeForce GTX 1080 Ti GPU. In addition, SGCPNet is very lightweight, containing only 0.61 M parameters. The code will be released at https://github.com/zhouyuan888888/SGCPNet.
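The abstract describes the core idea only at a high level. Below is a minimal, hypothetical PyTorch sketch of what detail-guided propagation of low-resolution context could look like: global context features are upsampled and fused under the guidance of high-resolution shallow-layer details, so high-resolution features need not be maintained throughout the network. The module name, channel sizes, and gating design are illustrative assumptions and do not reproduce the authors' exact SGCPNet architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DetailGuidedContextPropagation(nn.Module):
    """Hypothetical module: fuses upsampled global context with shallow spatial details."""
    def __init__(self, context_channels: int, detail_channels: int, out_channels: int):
        super().__init__()
        # Project both streams to a common channel width.
        self.context_proj = nn.Conv2d(context_channels, out_channels, kernel_size=1)
        self.detail_proj = nn.Conv2d(detail_channels, out_channels, kernel_size=1)
        # Predict per-pixel guidance weights from the spatial details.
        self.guidance = nn.Sequential(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, context: torch.Tensor, detail: torch.Tensor) -> torch.Tensor:
        # Upsample the low-resolution global context to the detail resolution.
        context = F.interpolate(context, size=detail.shape[-2:], mode="bilinear",
                                align_corners=False)
        context = self.context_proj(context)
        detail = self.detail_proj(detail)
        # Spatial details gate the propagated context, reconstructing lost detail.
        gate = self.guidance(detail)
        return gate * context + (1.0 - gate) * detail

if __name__ == "__main__":
    # Example: 1/32-resolution context guided by 1/8-resolution shallow details.
    context = torch.randn(1, 128, 24, 48)   # low-resolution global context
    detail = torch.randn(1, 64, 96, 192)    # high-resolution shallow details
    fuse = DetailGuidedContextPropagation(128, 64, 64)
    print(fuse(context, detail).shape)      # torch.Size([1, 64, 96, 192])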
Pages: 4042-4053
Page count: 12