Original Article
Synthesis of magnetic resonance images from computed tomography data using convolutional neural network with contextual loss function

Cited by: 12
Authors
Li, Zhaotong [1 ,2 ]
Huang, Xinrui [3 ]
Zhang, Zeru [1 ,2 ]
Liu, Liangyou [1 ,2 ]
Wang, Fei [4 ]
Li, Sha [4 ]
Gao, Song [1 ]
Xia, Jun [5 ]
Affiliations
[1] Peking Univ Hlth Sci Ctr, Inst Med Technol, 38 Xueyuan Rd, Beijing, Peoples R China
[2] Peking Univ, Inst Med Humanities, Beijing, Peoples R China
[3] Peking Univ, Sch Basic Med Sci, Dept Biochem & Biophys, Beijing, Peoples R China
[4] Peking Univ Canc Hosp & Inst, Key Lab Carcinogenesis & Translat Res, Minist Educ Beijing, Beijing Canc Hosp & Inst,Dept Radiat Oncol, Beijing, Peoples R China
[5] Shenzhen Univ, Shenzhen Peoples Hosp 2, Hlth Sci Ctr, Dept Radiol,Affiliated Hosp 1, 3002 Sungang West Rd, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Synthesis of magnetic resonance imaging (synthesis of MRI); radiotherapy treatment planning system (radiotherapy TPS); U-Net; ResNet; contextual loss; MRI; CT;
DOI
10.21037/qims-21-846
CLC classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject classification codes
1002 ; 100207 ; 1009 ;
Abstract
Background: Magnetic resonance imaging (MRI) images synthesized from computed tomography (CT) data can provide more detailed information on pathological structures than CT data alone; thus, MRI synthesis has received increasing attention, especially in medical scenarios where only CT images are available. A novel convolutional neural network (CNN) combined with a contextual loss function was proposed for the synthesis of T1- and T2-weighted images (T1WI and T2WI) from CT data. Methods: A total of 5,053 and 5,081 slices of T1WI and T2WI, respectively, were selected for the dataset of CT and MRI image pairs. Affine registration, image denoising, and contrast enhancement were performed on this multi-modality medical image dataset comprising T1WI, T2WI, and CT images of the brain. A deep CNN, called double ResNet-U-Net (DRUNet), was then proposed by modifying the ResNet structure to constitute the encoder and decoder of U-Net. Three different loss functions were used to optimize the parameters of the proposed models: mean squared error (MSE) loss, binary cross-entropy (BCE) loss, and contextual loss. Independent-sample t-tests were conducted to compare DRUNets with different loss functions and different numbers of network layers. Results: DRUNet-101 with contextual loss yielded the highest values of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Tenengrad function (34.25 +/- 2.06, 0.97 +/- 0.03, and 17.03 +/- 2.75 for T1WI and 33.50 +/- 1.08, 0.98 +/- 0.05, and 19.76 +/- 3.54 for T2WI, respectively). The differences were statistically significant at P<0.001 with narrow confidence intervals, indicating the superiority of DRUNet-101 with contextual loss. In addition, both the zoomed-in views and the difference maps presented for the final synthetic MR images visually reflected the robustness of DRUNet-101 with contextual loss.
The visualization of convolution filters and feature maps showed that the proposed model can generate synthetic MR images with high-frequency information. Conclusions: The results demonstrated that DRUNet-101 with the contextual loss function preserved more high-frequency information in synthetic MR images than the other two loss functions. The proposed DRUNet model has a distinct advantage over previous models in terms of PSNR, SSIM, and Tenengrad score. Overall, DRUNet-101 with contextual loss is recommended for synthesizing MR images from CT scans.
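The contextual loss named in the abstract compares sets of deep feature vectors rather than spatially aligned pixels, which makes it tolerant of residual misregistration between the CT and MRI pairs. The record does not include the authors' implementation; the following is a minimal NumPy sketch of the commonly used formulation (Mechrez et al.), in which the feature shapes, bandwidth `h`, and `eps` are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def contextual_loss(x, y, h=0.5, eps=1e-5):
    """Contextual loss between two sets of feature vectors.

    x, y: arrays of shape (N, C), e.g., flattened feature maps from a
    pretrained CNN. Sketch of the standard formulation, not the paper's code.
    """
    # Center both feature sets by the mean of the target features y
    mu = y.mean(axis=0, keepdims=True)
    xc, yc = x - mu, y - mu
    # Cosine distances d_ij between every x_i and y_j
    xn = xc / (np.linalg.norm(xc, axis=1, keepdims=True) + eps)
    yn = yc / (np.linalg.norm(yc, axis=1, keepdims=True) + eps)
    d = 1.0 - xn @ yn.T                              # shape (N, N)
    # Normalize each row by its minimal distance
    d_tilde = d / (d.min(axis=1, keepdims=True) + eps)
    # Convert distances to similarities and row-normalize (soft matching)
    w = np.exp((1.0 - d_tilde) / h)
    cx = w / w.sum(axis=1, keepdims=True)
    # Global similarity: best match for each target feature, averaged
    return -np.log(cx.max(axis=0).mean() + eps)
```

Because the loss is built from feature-to-feature similarities, identical feature sets give a loss near zero while unrelated sets give a larger value, which is the behavior the t-test comparisons in the abstract rely on.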
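The evaluation reported above uses PSNR, SSIM, and the Tenengrad function (a Sobel-gradient sharpness score). As a rough illustration of the two simpler metrics, assuming 2-D images and an 8-bit data range (SSIM is lengthier and omitted here; this is not the authors' code):

```python
import numpy as np

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def tenengrad(img):
    """Mean squared Sobel-gradient magnitude; larger means sharper."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    # Horizontal Sobel response, computed via shifted slices of the padded image
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    # Vertical Sobel response
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.mean(gx ** 2 + gy ** 2)
```

A flat image scores a Tenengrad of zero and an infinite PSNR against itself, while high-frequency content (the quality the contextual loss is credited with preserving) drives the Tenengrad score up.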
Pages: 3151-3169
Page count: 19