MMTLNet: Multi-Modality Transfer Learning Network with adversarial training for 3D whole heart segmentation

Cited by: 0
Authors
Liao X. [1 ]
Qian Y. [1 ]
Chen Y. [1 ]
Xiong X. [2 ]
Wang Q. [1 ]
Heng P.-A. [1 ,3 ]
Affiliations
[1] Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
[2] Department of Medical Imaging, Zhongnan Hospital of Wuhan University
[3] Department of Computer Science and Engineering, The Chinese University of Hong Kong
Source
Comput. Med. Imaging Graph. | 2020
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Multi-modality whole heart segmentation; Transfer learning;
DOI
10.1016/j.compmedimag.2020.101785
Abstract
Accurate whole heart segmentation (WHS) of multi-modality medical images, including magnetic resonance imaging (MRI) and computed tomography (CT), plays an important role in many clinical applications, such as preoperative diagnosis and planning and intraoperative treatment. Because the shape information of each component of the whole heart is complementary across modalities, multi-modality features can be extracted and the final segmentation obtained by fusing MRI and CT images. In this paper, we propose a multi-modality transfer learning network with adversarial training (MMTLNet) for 3D multi-modality whole heart segmentation. First, the network transfers the source domain (MRI) to the target domain (CT) by reconstructing the MRI images with a generator network and refining the reconstructed images with a discriminator network, which allows the MRI images to be fused with the CT images so that the useful information from both modalities is fully exploited for the segmentation task. Second, to retain useful information and discard redundant information, we introduce a spatial attention mechanism into the backbone of the U-Net to refine feature extraction between layers, and add a channel attention mechanism at the skip connections to refine the information extracted from the low-level feature maps. Third, we propose a new loss function for the adversarial training that introduces a weighting coefficient to balance the Dice coefficient loss against the generator loss, which not only ensures that the images are correctly transferred from the MRI domain to the CT domain, but also yields accurate segmentation in the transferred domain. We extensively evaluated our method on the data set of the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017.
The Dice scores for whole heart segmentation are 0.914 (CT images) and 0.890 (MRI images), both higher than the state of the art. © 2020 Elsevier Ltd
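The abstract's weighted loss can be illustrated with a minimal sketch. The exact formula is not given in this record, so the convex combination below (with a hypothetical weighting coefficient `lam`) and the soft Dice formulation are assumptions, not the authors' implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|).
    `pred` holds per-voxel probabilities, `target` binary labels."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(pred, target, gen_loss, lam=0.5):
    """Weighted sum of the segmentation (Dice) loss and the generator
    (adversarial) loss; `lam` distributes the proportion between the
    two terms, as described in the abstract. The value 0.5 is an
    illustrative default, not the paper's setting."""
    return lam * dice_loss(pred, target) + (1.0 - lam) * gen_loss
```

With `lam` close to 1 the network favors segmentation accuracy; with `lam` close to 0 it favors faithful MRI-to-CT domain transfer.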