DC-cycleGAN: Bidirectional CT-to-MR synthesis from unpaired data

Cited by: 24
Authors
Wang, Jiayuan [1]
Wu, Q. M. Jonathan [1]
Pourpanah, Farhad [2]
Affiliations
[1] University of Windsor, Department of Electrical and Computer Engineering, Windsor, ON, Canada
[2] Queen's University, Department of Electrical and Computer Engineering, Kingston, ON, Canada
Keywords
Medical image synthesis; Generative adversarial network; Cycle consistency loss; Magnetic resonance; Computed tomography images
DOI
10.1016/j.compmedimag.2023.102249
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Subject Classification Code
0831
Abstract
Magnetic resonance (MR) and computed tomography (CT) images are two typical types of medical images that provide mutually complementary information for accurate clinical diagnosis and treatment. However, obtaining both images may be limited by considerations such as cost, radiation dose, and missing modalities. Recently, medical image synthesis has attracted growing research interest as a way to cope with this limitation. In this paper, we propose a bidirectional learning model, denoted dual contrast cycleGAN (DC-cycleGAN), to synthesize medical images from unpaired data. Specifically, a dual contrast loss is introduced into the discriminators to indirectly build constraints between real source images and synthetic images: samples from the source domain are used as negative samples, forcing the synthetic images to lie far from the source domain. In addition, cross-entropy and the structural similarity index (SSIM) are integrated into DC-cycleGAN so that both the luminance and the structure of samples are considered when synthesizing images. The experimental results indicate that DC-cycleGAN produces promising results compared with other cycleGAN-based medical image synthesis methods such as cycleGAN, RegGAN, DualGAN, and NiceGAN. Code is available at https://github.com/JiayuanWang-JW/DC-cycleGAN.
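The loss design described in the abstract can be made concrete with a short sketch. The following PyTorch-style snippet is a minimal illustration of one plausible reading of that description, not the authors' implementation (the released code is at the GitHub link above): the discriminator receives real source-domain images as additional negative samples, and the cycle-consistency term mixes pixel-wise L1 with a simplified SSIM so that both luminance and structure are penalized. All function names, the loss weighting, and the non-windowed SSIM are assumptions made for this sketch.

# Minimal sketch, assuming a PyTorch setup; not the authors' released code
# (see https://github.com/JiayuanWang-JW/DC-cycleGAN for the reference implementation).
import torch
import torch.nn.functional as F

def dual_contrast_d_loss(d_real_target, d_fake, d_real_source):
    # Standard real/fake cross-entropy terms on discriminator logits ...
    real_loss = F.binary_cross_entropy_with_logits(
        d_real_target, torch.ones_like(d_real_target))   # real target images -> 1
    fake_loss = F.binary_cross_entropy_with_logits(
        d_fake, torch.zeros_like(d_fake))                 # synthesized images -> 0
    # ... plus the "dual contrast" idea: real SOURCE-domain images are also
    # labelled 0, so synthetic images are pushed away from the source domain.
    neg_source_loss = F.binary_cross_entropy_with_logits(
        d_real_source, torch.zeros_like(d_real_source))
    return real_loss + fake_loss + neg_source_loss

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified per-image (global, non-windowed) SSIM for intensities in [0, 1].
    mu_x = x.mean(dim=(1, 2, 3))
    mu_y = y.mean(dim=(1, 2, 3))
    var_x = x.var(dim=(1, 2, 3), unbiased=False)
    var_y = y.var(dim=(1, 2, 3), unbiased=False)
    cov = ((x - mu_x[:, None, None, None]) * (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim.mean()

def cycle_loss_with_ssim(reconstructed, real, lambda_ssim=1.0):
    # Cycle-consistency term combining pixel-wise L1 with a structural (SSIM) penalty,
    # so both luminance and structure are taken into account; weights are illustrative.
    return F.l1_loss(reconstructed, real) + lambda_ssim * (1.0 - global_ssim(reconstructed, real))

The global SSIM above is only there to keep the sketch self-contained; a standard windowed SSIM would normally be used, and the exact way cross-entropy and SSIM enter the objective should be taken from the paper and repository rather than from this illustration.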
Pages: 9
Related Papers
49 records in total
[1] Abu-Srhan A., Almallahi I., Abushariah M.A.M., Mahafza W., Al-Kadi O.S. Paired-unpaired unsupervised attention guided GAN with transfer learning for bidirectional brain MR-CT synthesis. Computers in Biology and Medicine, 2021, 136.
[2] Bai Y. Mod Hosp, 2008, 8: 62.
[3] Bi L., Kim J., Kumar A., Feng D., Fulham M. Synthesis of positron emission tomography (PET) images via multi-channel generative adversarial networks (GANs). Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment, 2017, LNCS 10555: 43-51.
[4] Chartsias A. Simulation and Synthesis in Medical Imaging: Second International Workshop, SASHIMI 2017, Held in Conjunction with MICCAI 2017, Proceedings, LNCS 10557, 2017: 3. DOI: 10.1007/978-3-319-68127-6_1.
[5] Chen R., Huang W., Huang B., Sun F., Fang B. Reusing discriminators for encoding: towards unsupervised image-to-image translation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), 2020: 8165-8174.
[6] Chen S., Qin A., Zhou D., Yan D. Technical note: U-net-generated synthetic CT images for magnetic resonance imaging-only prostate intensity-modulated radiation therapy treatment planning. Medical Physics, 2018, 45(12): 5659-5665.
[7] Fu J. Male pelvic synthetic CT generation from T1-weighted MRI using 2D and 3D convolutional neural networks. 2018.
[8] Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Generative adversarial networks. Communications of the ACM, 2020, 63(11): 139-144.
[9] Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Medical Physics, 2017, 44(4): 1408-1419.
[10] Heinrich M.P., Jenkinson M., Bhushan M., Matin T., Gleeson F.V., Brady M., Schnabel J.A. MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. Medical Image Analysis, 2012, 16(7): 1423-1435.