MMTLNet: Multi-Modality Transfer Learning Network with adversarial training for 3D whole heart segmentation

Cited by: 0
Authors
Liao X. [1]
Qian Y. [1]
Chen Y. [1]
Xiong X. [2]
Wang Q. [1]
Heng P.-A. [1,3]
Affiliations
[1] Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
[2] Department of Medical Imaging, Zhongnan Hospital of Wuhan University
[3] Department of Computer Science and Engineering, The Chinese University of Hong Kong
Source
Comput. Med. Imaging Graph. | 2020
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Multi-modality whole heart segmentation; Transfer learning;
DOI
10.1016/j.compmedimag.2020.101785
Abstract
Accurate whole heart segmentation (WHS) of multi-modality medical images, including magnetic resonance imaging (MRI) and computed tomography (CT), plays an important role in many clinical applications, such as preoperative diagnosis and planning as well as intraoperative treatment. Because the shape information of each component of the whole heart is complementary across modalities, multi-modality features can be extracted and the final segmentation obtained by fusing MRI and CT images. In this paper, we propose a multi-modality transfer learning network with adversarial training (MMTLNet) for 3D multi-modality whole heart segmentation. First, the network transfers the source domain (MRI) to the target domain (CT) by reconstructing the MRI images with a generator network and refining the reconstructed images with a discriminator network, which allows the MRI images to be fused with the CT images so that the useful multi-modality information is fully exploited for the segmentation task. Second, to retain useful information and discard redundant information, we introduce a spatial attention mechanism into the backbone connections of the U-Net to refine feature extraction between layers, and add a channel attention mechanism at the skip connections to refine the information extracted from the low-level feature maps. Third, we propose a new loss function for the adversarial training that introduces a weighting coefficient to balance the Dice coefficient loss against the generator loss, which not only ensures that images are correctly transferred from the MRI domain to the CT domain, but also yields accurate segmentation in the transferred domain. We extensively evaluated our method on the data set of the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017. The Dice scores for whole heart segmentation are 0.914 (CT images) and 0.890 (MRI images), both higher than the state-of-the-art. © 2020 Elsevier Ltd
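The weighted combination of Dice loss and generator (adversarial) loss described in the abstract can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions: the weighting coefficient lam, the tensor layout (N, C, D, H, W), the non-saturating form of the generator loss, and all function names are illustrative choices, not the paper's exact formulation.

# Illustrative sketch (PyTorch) of a weighted Dice + generator (adversarial) loss.
# Assumes softmax probability maps `pred` of shape (N, C, D, H, W), one-hot
# labels `target` of the same shape, and discriminator logits `d_fake` for the
# MRI-to-CT transferred images. The coefficient `lam` and all names are
# assumptions made for illustration.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss averaged over classes and batch.
    dims = (2, 3, 4)                            # spatial dimensions D, H, W
    inter = (pred * target).sum(dims)
    union = pred.sum(dims) + target.sum(dims)
    dice = (2.0 * inter + eps) / (union + eps)  # shape (N, C)
    return 1.0 - dice.mean()

def generator_loss(d_fake):
    # Non-saturating GAN loss: the generator is rewarded when the discriminator
    # labels the transferred (fake CT) images as real.
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))

def combined_loss(pred, target, d_fake, lam=0.5):
    # The weighting coefficient lam distributes the proportion between the
    # segmentation (Dice) term and the adversarial (generator) term.
    return lam * dice_loss(pred, target) + (1.0 - lam) * generator_loss(d_fake)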