Uncertainty Driven Adaptive Self-Knowledge Distillation for Medical Image Segmentation

Cited by: 0
Authors
Guo, Xutao [1 ,2 ]
Wang, Mengqi [3 ]
Xiang, Yang [2 ]
Yang, Yanwu [1 ,2 ]
Ye, Chenfei [4 ,5 ]
Wang, Haijun [6 ,7 ]
Ma, Ting [4 ,8 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Peoples R China
[3] Shenzhen Univ, Shenzhen Peoples Hosp 2, Affiliated Hosp 1, Dept Neurooncol, Shenzhen 518037, Peoples R China
[4] Harbin Inst Technol Shenzhen, Sch Biomed Engn & Digital Hlth, Shenzhen 518055, Peoples R China
[5] Harbin Inst Technol, Int Res Inst Artificial Intelligence, Shenzhen 518055, Peoples R China
[6] Sun Yat Sen Univ, Affiliated Hosp 6, Dept Neurosurg, Guangzhou 510655, Peoples R China
[7] First Affiliated Hosp Sun Yat Sen, Dept Neurosurg, Guangzhou 510060, Peoples R China
[8] Harbin Inst Technol, Guangdong Prov Key Lab Aerosp Commun & Networking, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Biomedical imaging; Training; Predictive models; Image segmentation; Adaptation models; Knowledge engineering; Estimation; Semantics; Computational modeling; Medical image segmentation; overfitting; knowledge distillation; uncertainty; cyclic ensembles; NEURAL-NETWORKS;
DOI
10.1109/TETCI.2025.3526259
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning has recently delivered significant gains in the precision of medical image segmentation. However, because medical segmentation datasets are typically small and training relies on hard labels (one-hot vectors), deep models often overfit, which degrades segmentation performance. To mitigate this problem, we propose an uncertainty-driven adaptive self-knowledge distillation (UAKD) model for medical image segmentation that regularizes training with self-generated soft labels. The key innovation of UAKD is to integrate uncertainty estimation into both soft-label generation and student-network training, ensuring accurate supervision and effective regularization. Specifically, UAKD introduces teacher-network ensembling to reduce the semantic bias in soft labels caused by individual teachers' fitting biases. It further proposes an adaptive knowledge distillation mechanism that uses uncertainty to assign adaptive weights to soft labels in the loss function, efficiently transferring reliable knowledge from the teacher networks to the student network while suppressing unreliable information. Finally, we introduce a gradient-ascent-based cyclic ensemble method that reduces teacher overfitting on the training data, further strengthening both the teacher ensembling and the uncertainty estimation described above. Experiments on three medical image segmentation tasks show that UAKD outperforms existing regularization methods and demonstrate the effectiveness of uncertainty estimation for assessing soft-label reliability.
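
To make the adaptive weighting concrete, below is a minimal PyTorch sketch of an uncertainty-weighted distillation loss in the spirit of the abstract. It is not the authors' implementation: the choice of per-pixel predictive entropy of the ensemble-mean soft label as the uncertainty measure, the exp(-entropy) weighting, and the function name uncertainty_weighted_kd_loss are all assumptions made for illustration.

import torch
import torch.nn.functional as F

def uncertainty_weighted_kd_loss(student_logits, teacher_logits_list, labels,
                                 temperature=2.0, alpha=0.5):
    # Illustrative sketch, not the paper's exact formulation.
    # student_logits: (B, C, H, W); teacher_logits_list: list of (B, C, H, W)
    # logits from the teacher ensemble; labels: (B, H, W) integer class map.

    # Soft labels: mean of the ensemble's temperature-scaled probabilities.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)                                                # (B, C, H, W)

    # Uncertainty: per-pixel predictive entropy of the ensemble mean (assumed).
    entropy = -(teacher_probs * torch.log(teacher_probs + 1e-8)).sum(dim=1)

    # Adaptive weights: down-weight pixels whose soft labels look unreliable.
    weights = torch.exp(-entropy)                                # (B, H, W)

    # Per-pixel KL divergence between student and ensemble soft labels.
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    kl = F.kl_div(log_student, teacher_probs, reduction="none").sum(dim=1)
    kd_loss = (weights * kl).mean() * temperature ** 2

    # Standard hard-label supervision on the student.
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1.0 - alpha) * ce_loss

# Example usage with hypothetical shapes: batch 2, 4 classes, 64x64, 3 teachers.
student = torch.randn(2, 4, 64, 64)
teachers = [torch.randn(2, 4, 64, 64) for _ in range(3)]
labels = torch.randint(0, 4, (2, 64, 64))
loss = uncertainty_weighted_kd_loss(student, teachers, labels)

The design intuition matches the abstract: pixels where the teacher ensemble disagrees (high entropy) contribute less to the distillation term, so unreliable soft labels are suppressed rather than transferred to the student.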
Pages: 14