Efficient Medical Image Segmentation Based on Knowledge Distillation

Cited by: 121
Authors
Qin, Dian [1 ]
Bu, Jia-Jun [1 ]
Liu, Zhe [1 ]
Shen, Xin [1 ]
Zhou, Sheng [2 ]
Gu, Jing-Jun [1 ]
Wang, Zhi-Hua [1 ]
Wu, Lei [1 ]
Dai, Hui-Fen [3 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci, Zhejiang Prov Key Lab Serv Robot, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Sch Software Technol, Ningbo Res Inst, Ningbo 315048, Peoples R China
[3] Zhejiang Univ, Sch Med, Affiliated Hosp 4, Yiwu 322000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image segmentation; Biomedical imaging; Semantics; Knowledge engineering; Feature extraction; Tumors; Computer architecture; Knowledge distillation; medical image segmentation; computerized tomography; lightweight neural network; transfer learning; CONVOLUTIONAL NEURAL-NETWORK;
DOI
10.1109/TMI.2021.3098703
CLC Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Recent advances have been made in applying convolutional neural networks to achieve more precise predictions for medical image segmentation. However, the success of existing methods relies heavily on huge computational complexity and massive storage, which is impractical in real-world scenarios. To deal with this problem, we propose an efficient architecture that distills knowledge from well-trained medical image segmentation networks to train a lightweight network. This architecture empowers the lightweight network to achieve a significant improvement in segmentation capability while retaining its runtime efficiency. We further devise a novel distillation module tailored for medical image segmentation to transfer semantic region information from the teacher to the student network. It forces the student network to mimic the extent of difference between representations calculated from different tissue regions. This module avoids the ambiguous-boundary problem encountered in medical imaging and instead encodes the internal information of each semantic region for transfer. Benefiting from our module, the lightweight network achieves an improvement of up to 32.6% in our experiments while maintaining its portability in the inference phase. The entire structure has been verified on two widely used public CT datasets, LiTS17 and KiTS19. We demonstrate that a lightweight network distilled by our method has non-negligible value in scenarios that require relatively high operating speed and low storage usage.
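The abstract describes a relation-style distillation signal: the student is trained to reproduce how different the teacher's representations of distinct tissue regions are from one another, rather than to match boundary predictions directly. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: it assumes region prototypes obtained by masked average pooling over ground-truth (or teacher-predicted) region masks, and a loss that matches the pairwise similarity structure of those prototypes between teacher and student. The names `region_prototypes` and `region_difference_loss` are illustrative.

```python
# Hypothetical sketch of region-wise distillation for segmentation.
# Assumption: region masks (background / organ / tumor, etc.) are available
# and feature maps come from matching stages of teacher and student.
import torch
import torch.nn.functional as F


def region_prototypes(features, masks, eps=1e-6):
    """Masked average pooling: one prototype vector per semantic region.

    features: (B, C, H, W) feature map from teacher or student.
    masks:    (B, K, H, W) one-hot masks for K regions.
    returns:  (B, K, C) region prototypes.
    """
    # Resize masks to the feature-map resolution.
    masks = F.interpolate(masks, size=features.shape[-2:], mode="nearest")
    # Sum features inside each region, then normalize by region area.
    num = torch.einsum("bchw,bkhw->bkc", features, masks)
    den = masks.sum(dim=(-2, -1)).unsqueeze(-1) + eps
    return num / den


def region_difference_loss(student_feat, teacher_feat, masks):
    """Make the student mimic the teacher's pairwise region-to-region similarities."""
    ps = region_prototypes(student_feat, masks)   # (B, K, C_s)
    pt = region_prototypes(teacher_feat, masks)   # (B, K, C_t)
    # K x K similarity matrices encode how far apart the regions are in each
    # network's own feature space; channel widths may differ between networks.
    ds = F.cosine_similarity(ps.unsqueeze(2), ps.unsqueeze(1), dim=-1)  # (B, K, K)
    dt = F.cosine_similarity(pt.unsqueeze(2), pt.unsqueeze(1), dim=-1)  # (B, K, K)
    return F.mse_loss(ds, dt)


if __name__ == "__main__":
    # Toy example: 2 CT slices, 3 regions, wider teacher than student.
    feats_s = torch.randn(2, 64, 32, 32)
    feats_t = torch.randn(2, 256, 32, 32)
    masks = F.one_hot(torch.randint(0, 3, (2, 128, 128)), 3).permute(0, 3, 1, 2).float()
    print(region_difference_loss(feats_s, feats_t, masks))
```

Because the loss compares K x K relation matrices rather than raw feature maps, it sidesteps pixel-accurate boundary alignment and tolerates different channel widths in teacher and student, which is consistent with the lightweight-student setting described above.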
Pages: 3820-3831
Number of pages: 12