FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation

Times cited: 2
Authors
Yuan, Wenhao [1 ]
Lu, Xiaoyan [1 ]
Zhang, Rongfen [1 ]
Liu, Yuhong [1 ]
Affiliations
[1] Guizhou Univ, Coll Big Data & Informat Engn, Guiyang 550025, Peoples R China
Keywords
knowledge distillation; feature condensation; prediction information entropy; feature soft enhancement; semantic segmentation; images
DOI
10.3390/e25010125
Chinese Library Classification
O4 [Physics]
Discipline code
0702
Abstract
As a popular research subject in the field of computer vision, knowledge distillation (KD) is widely used in semantic segmentation (SS). However, under the teacher-student learning paradigm, the poor quality of the feature knowledge produced by the teacher network still hinders the development of KD techniques. In this paper, we investigate the output features of the teacher-student network and propose a feature condensation-based KD network (FCKDNet), which reduces pseudo-knowledge transfer between the teacher and student networks. First, using a pixel information entropy calculation rule, we design a feature condensation method that separates the foreground feature knowledge in the teacher network's outputs from background noise. Then, the resulting feature condensation matrix is applied to the original outputs of both the teacher and student networks to improve their feature representation capability. In addition, after condensing the teacher network's features, we propose a soft feature enhancement method based on the spatial and channel dimensions to strengthen pixel dependencies within the feature maps. Finally, we divide the teacher network's outputs into spatial condensation features and channel condensation features and compute the distillation loss against the student network for each separately, helping the student network converge faster. Extensive experiments on the public datasets Pascal VOC and Cityscapes demonstrate that our proposed method improves the baseline by 3.16% and 2.98% in terms of mAcc, and 2.03% and 2.30% in terms of mIoU, respectively, and has better segmentation performance and robustness than mainstream methods.
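The entropy-based condensation step described in the abstract can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the threshold `tau`, the rule that low-entropy (confident) pixels count as reliable foreground knowledge, and all function names are ours, and a real implementation would operate on framework tensors rather than nested Python lists.

```python
import math

def pixel_entropy(probs):
    """Shannon entropy of one pixel's class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def condensation_mask(pred, tau):
    """Binary mask keeping pixels whose prediction entropy is below tau.

    pred: H x W grid, each cell a per-class softmax probability vector
    (e.g. a teacher network's output). Low-entropy pixels are treated as
    confident foreground knowledge; high-entropy pixels are masked out
    as background noise.
    """
    return [[1 if pixel_entropy(p) < tau else 0 for p in row] for row in pred]

def condense(feat, mask):
    """Element-wise application of the condensation mask to one feature map."""
    return [[f * m for f, m in zip(frow, mrow)]
            for frow, mrow in zip(feat, mask)]
```

The same mask would be applied to both the teacher's and the student's outputs before computing the distillation loss, so that the student only imitates the teacher where the teacher's prediction is confident.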
Pages: 16