Deep Transfer Learning from Constrained Source to Target Domains in Medical Image Segmentation

Authors
Krishnan, Chetana [1]
Schmidt, Emma [1]
Onuoha, Ezinwanne [1]
Mullen, Sean [2]
Roye, Ronald [2]
Chumley, Phillip [2]
Mrug, Michal [2,3]
Cardenas, Carlos E. [4]
Kim, Harrison [5,6]
Affiliations
[1] Univ Alabama Birmingham, Dept Biomed Engn, Birmingham, AL 35294 USA
[2] Univ Alabama Birmingham, Div Nephrol, Birmingham, AL 35294 USA
[3] Dept Vet Affairs Med Ctr, Birmingham, AL 35233 USA
[4] Univ Alabama Birmingham, Dept Radiat Oncol & Radiol, Birmingham, AL 35294 USA
[5] Univ Alabama Birmingham, Dept Biomed Engn, Birmingham, AL 35294 USA
[6] Univ Alabama Birmingham, Dept Radiol, Birmingham, AL 35294 USA
Keywords
transfer learning; medical image segmentation; autosomal polycystic kidney disease; deep learning; DOMINANT POLYCYSTIC KIDNEY; CONVOLUTIONAL NEURAL-NETWORKS; DISEASE; CLASSIFICATION; PERFORMANCE; CONSORTIUM;
DOI
10.2352/J.ImagingSci.Technol.2024.68.6.060505
CLC Number
TB8 [Photographic Technology];
Subject Classification Number
0804;
Abstract
The aim of this work is to transfer a model trained on magnetic resonance (MR) images of human autosomal dominant polycystic kidney disease (ADPKD) to rat and mouse PKD models. A dataset of 756 MR images of ADPKD kidneys was used to train a modified UNet3+ architecture, which incorporated residual layers, switchable normalization, and concatenated skip connections, for kidney and cyst segmentation. The trained model was then adapted via transfer learning (TL) using data from two commonly used animal PKD models: the Pkhd1pck (PCK) rat and the Pkd1RC/RC (RC) mouse. On the animal test datasets, TL achieved Dice similarity coefficients (mean ± SD) of 0.93 ± 0.04 for kidneys and 0.63 ± 0.16 for cysts on a combined PCK+RC sample. This work demonstrates the use of TL when both the source and target datasets are constrained, achieving good accuracy even under class imbalance.
Pages: 10
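
The abstract names three architectural ingredients (residual layers, switchable normalization, concatenated skip connections), a transfer-learning step, and the Dice similarity coefficient as the evaluation metric. The PyTorch sketch below is an illustration of those pieces only, not the authors' implementation: the module names, layer sizes, freezing policy, and weight-file path are hypothetical, and the switchable-normalization blend is simplified relative to the published formulation.

```python
import torch
import torch.nn as nn


class SwitchableNorm2d(nn.Module):
    """Simplified switchable normalization: a learned softmax-weighted blend
    of instance, layer, and batch statistics (illustrative sketch only)."""

    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(1, num_features, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_features, 1, 1))
        self.mean_logits = nn.Parameter(torch.zeros(3))
        self.var_logits = nn.Parameter(torch.zeros(3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dims = [(2, 3), (1, 2, 3), (0, 2, 3)]  # IN, LN, BN reduction axes
        means = [x.mean(d, keepdim=True) for d in dims]
        varis = [x.var(d, keepdim=True, unbiased=False) for d in dims]
        mw = torch.softmax(self.mean_logits, dim=0)
        vw = torch.softmax(self.var_logits, dim=0)
        mean = sum(w * m for w, m in zip(mw, means))
        var = sum(w * v for w, v in zip(vw, varis))
        return self.weight * (x - mean) / torch.sqrt(var + self.eps) + self.bias


class ResBlock(nn.Module):
    """Residual convolution block with switchable normalization, the kind of
    unit the abstract's 'residual layers' could refer to."""

    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.norm1 = SwitchableNorm2d(c_out)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.norm2 = SwitchableNorm2d(c_out)
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.act(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return self.act(h + self.skip(x))  # identity shortcut around two convs


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Dice similarity coefficient on binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    # Hypothetical fine-tuning setup: load source-domain (human) weights,
    # freeze early feature extractors, retrain the rest on the animal data.
    block = ResBlock(1, 16)  # stand-in for a full UNet3+-style encoder stage
    # block.load_state_dict(torch.load("human_adpkd_block.pt"))  # hypothetical file
    for name, param in block.named_parameters():
        if name.startswith("conv1"):  # freeze the earliest layer, for example
            param.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in block.parameters() if p.requires_grad), lr=1e-4
    )
    x = torch.randn(2, 1, 64, 64)  # toy MR slices
    mask = (block(x).mean(1, keepdim=True) > 0).float()
    print(dice_coefficient(mask, mask))  # identical masks -> Dice = 1.0
```

Freezing early layers while retraining later ones is one common TL recipe when the target dataset is small; the abstract does not specify the paper's exact fine-tuning schedule, so the policy shown here is an assumption.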