MRI Cross-Modality Image-to-Image Translation

Cited by: 79
Authors
Yang, Qianye [1,2,3]
Li, Nannan [1,2,3,6]
Zhao, Zixu [1,2,3]
Fan, Xingyu [4]
Chang, Eric I-Chao [5]
Xu, Yan [1,2,3,5]
Affiliations
[1] Beihang Univ, Beihang Univ Shenzhen, Beijing Adv Innovat Ctr Biomed Engn, State Key Lab Software Dev Environm,Minist Educ, Beijing 100191, Peoples R China
[2] Beihang Univ, Beihang Univ Shenzhen, Beijing Adv Innovat Ctr Biomed Engn, Key Lab Biomech & Mechanobiol,Minist Educ, Beijing 100191, Peoples R China
[3] Beihang Univ, Beihang Univ Shenzhen, Beijing Adv Innovat Ctr Biomed Engn, Res Inst, Beijing 100191, Peoples R China
[4] Chongqing Univ, Bioengn Coll, Chongqing 400044, Peoples R China
[5] Microsoft Res Asia, Beijing 100080, Peoples R China
[6] Ping Technol Shenzhen Co Ltd, Shanghai 200030, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
SEGMENTATION; REGISTRATION; FRAMEWORK;
DOI
10.1038/s41598-020-60520-6
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
We present a cross-modality generation framework that learns to generate translated modalities from given modalities in MR images. Our method performs Image Modality Translation (IMT) with a deep learning model built on conditional generative adversarial networks (cGANs). The framework jointly exploits low-level features (pixel-wise information) and high-level representations (e.g., brain tumors and brain structures such as gray matter) across modalities, which is important for resolving the challenging complexity of brain structures. Building on this framework, we first propose a method for cross-modality registration that fuses deformation fields to adopt cross-modality information from the translated modalities. Second, we propose an approach to MRI segmentation, translated multichannel segmentation (TMS), in which given modalities, together with their translated modalities, are segmented by fully convolutional networks (FCNs) in a multichannel manner. Both methods adopt cross-modality information to improve performance without adding any extra data. Experiments demonstrate that our framework advances the state of the art on five brain MRI datasets, and we observe encouraging results for cross-modality registration and segmentation on widely adopted brain datasets. Overall, our work can serve as an auxiliary method in medical use and be applied to various tasks in medical imaging.
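The translated multichannel segmentation (TMS) idea in the abstract is easy to picture in code: the given modality and its cGAN-translated counterpart are stacked as input channels of a single fully convolutional network. Below is a minimal PyTorch sketch of that input pipeline; TinyFCN, the channel counts, and the random tensors standing in for real T1/T2 slices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of translated multichannel segmentation (TMS): a given
# modality and its translated modality are stacked as input channels to a
# fully convolutional network. Names here (TinyFCN) are hypothetical.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional segmenter standing in for the paper's FCN."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Random tensors stand in for a real slice of the given modality (e.g. T1)
# and the output of an upstream cGAN translator (e.g. synthesized T2).
t1 = torch.randn(1, 1, 128, 128)             # given modality
t2_translated = torch.randn(1, 1, 128, 128)  # translated modality

# TMS: stack given + translated modalities along the channel axis and
# segment them in one multichannel forward pass.
tms_input = torch.cat([t1, t2_translated], dim=1)   # shape (1, 2, 128, 128)
seg = TinyFCN(in_channels=2, num_classes=4)
logits = seg(tms_input)                              # (1, 4, 128, 128)
```

The design point this illustrates is that the cross-modality signal enters only through the extra input channel, so no additional acquired data or labels are required, consistent with the abstract's claim.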
Pages: 18