FRKDNet: feature refine semantic segmentation network based on knowledge distillation

Cited by: 1
Authors
Jiang Shi-yi [1]
Xu Yang [1,2]
Li Dan-yang [1]
Fan Run-ze [1]
Affiliations
[1] Guizhou Univ, Coll Big Data & Informat Engn, Guiyang 550025, Peoples R China
[2] Guiyang Aluminum Magnesium Design & Res Inst Co L, Guiyang 550009, Peoples R China
Keywords
semantic segmentation; neural network; knowledge distillation; feature refine; deep learning;
DOI
10.37188/CJLCD.2023-0010
CLC Number
O7 [Crystallography];
Discipline Codes
0702; 070205; 0703; 080501;
Abstract
Conventional knowledge distillation schemes for semantic segmentation still suffer from incomplete distillation and weak transfer of feature information, which limits network performance; moreover, the knowledge transferred by the teacher network is complex, so feature location information is easily lost. To address these problems, this paper presents FRKDNet, a feature-refined semantic segmentation network based on knowledge distillation. First, a feature extraction method is designed to separate foreground content from background noise in the distilled knowledge and to filter out the teacher network's pseudo knowledge, so that more accurate feature content is passed to the student network and feature quality is improved. At the same time, inter-class and intra-class distances are extracted from the implicit encoding of the feature space to obtain the corresponding feature coordinate mask. The student network then imitates the feature location information to minimize the discrepancy between its feature-location output and that of the teacher network, and the corresponding distillation losses are computed separately, improving the student network's segmentation accuracy and helping it converge faster. Finally, the method achieves strong segmentation performance on the public Pascal VOC and Cityscapes datasets, reaching mIoU of 74.19% and 76.53% respectively, which is 2.04 and 4.48 percentage points higher than the original student network. Compared with mainstream methods, the proposed method offers better segmentation performance and robustness, and provides a new approach to knowledge distillation for semantic segmentation.
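The abstract combines a soft-label distillation loss with a foreground-masked feature loss. A minimal NumPy sketch of these two ingredients is shown below, assuming pixel-wise logits of shape (H, W, C) and a binary foreground mask; the function names, the temperature value, and the simple masked-L2 formulation are illustrative assumptions, not the paper's exact FRKDNet formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """Soft-label KD loss in the style of Hinton et al. (2015):
    KL divergence between temperature-softened teacher and student
    class distributions, averaged over all pixels."""
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    kl = (p * (np.log(p + 1e-8) - np.log(q + 1e-8))).sum(axis=-1)
    return float(kl.mean() * T * T)

def masked_feature_loss(teacher_feat, student_feat, fg_mask):
    """Hypothetical foreground-refined feature loss: mean squared
    difference between teacher and student feature maps, weighted by a
    binary (H, W) foreground mask so background noise is filtered out."""
    diff = (teacher_feat - student_feat) ** 2
    m = fg_mask[..., None]  # broadcast mask over the channel axis
    return float((diff * m).sum() / (m.sum() + 1e-8))
```

In this sketch the total training loss would be the student's ordinary cross-entropy plus a weighted sum of the two terms above; matching the teacher only inside the foreground mask is one simple way to realize the "separate foreground content from background noise" idea the abstract describes.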
Pages: 1590-1599
Page count: 10