Deep Generative Adversarial Networks for Thin-Section Infant Image Reconstruction

Cited by: 16
Authors
Gu, Jiaqi [1 ]
Li, Zeju [1 ]
Wang, Yuanyuan [1 ,3 ]
Yang, Haowei [2 ]
Qiao, Zhongwei [2 ]
Yu, Jinhua [1 ,3 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai 200433, Peoples R China
[2] Fudan Univ, Childrens Hosp, Shanghai 201102, Peoples R China
[3] Fudan Univ, Inst Funct & Mol Med Imaging, Dept Elect Engn, Key Lab Med Imaging Comp & Comp Assisted Interven, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; infant magnetic resonance (MR) images; super-resolution reconstruction; thick-section; thin-section; SUPERRESOLUTION;
DOI
10.1109/ACCESS.2019.2918926
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Due to their high spatial resolution, thin-section magnetic resonance (MR) images serve as ideal medical images for brain structure investigation and brain surgery navigation. However, compared with the thick-section MR images widely used in clinical practice, thin-section MR images are less available because of their imaging cost. Thin-section MR images of infants are even scarcer, yet they are quite valuable for the study of human brain development. Therefore, we propose a method for reconstructing thin-section MR images from thick-section images. A two-stage reconstruction framework based on generative adversarial networks (GANs) and a convolutional neural network (CNN) is proposed to reconstruct thin-section MR images from thick-section images in the axial and sagittal planes. A 3D-Y-Net-GAN is first proposed to fuse MR images from the axial and sagittal planes and to achieve the first-stage thin-section reconstruction. A 3D-DenseUNet followed by a stack of enhanced residual blocks is then proposed to provide further detail recalibration and structural correction in the sagittal plane. A comprehensive loss function is also proposed to help the networks capture more structural details. The reconstruction performance of the proposed method is compared with bicubic interpolation, sparse representation, and 3D-SRU-Net. Cross-validation on 35 cases and independent testing on two datasets comprising 114 cases in total show that, compared with the other three methods, the proposed method provides an average improvement of 23.5% in peak signal-to-noise ratio (PSNR), 90.5% in structural similarity (SSIM), and 21.5% in normalized mutual information (NMI). Quantitative evaluation and visual inspection demonstrate that the proposed method outperforms these methods by reconstructing more realistic results with better structural details.
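The two-stage pipeline summarized in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch. The class names (Stage1FusionGenerator, ResidualRefiner), channel counts, and layer choices below are assumptions made for illustration only; the paper's actual 3D-Y-Net-GAN, 3D-DenseUNet, adversarial discriminator, and comprehensive loss function are considerably more elaborate.

```python
# Hypothetical sketch of the two-stage reconstruction pipeline described in
# the abstract. This is NOT the authors' 3D-Y-Net-GAN / 3D-DenseUNet code,
# only a simplified stand-in to convey the data flow.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Basic unit assumed here: 3D convolution -> batch norm -> ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class Stage1FusionGenerator(nn.Module):
    """Stand-in for the first-stage generator: two encoder branches (axial
    and sagittal thick-section volumes, assumed already resampled to a
    common grid) fused into one decoder that predicts the first-stage
    thin-section volume."""

    def __init__(self, ch=16):
        super().__init__()
        self.enc_axial = conv_block(1, ch)
        self.enc_sagittal = conv_block(1, ch)
        self.fuse = conv_block(2 * ch, ch)   # Y-shaped fusion of both branches
        self.out = nn.Conv3d(ch, 1, kernel_size=1)

    def forward(self, axial, sagittal):
        feats = torch.cat(
            [self.enc_axial(axial), self.enc_sagittal(sagittal)], dim=1
        )
        return self.out(self.fuse(feats))


class ResidualRefiner(nn.Module):
    """Stand-in for the second stage: a stack of residual blocks that
    recalibrates details of the stage-1 output."""

    def __init__(self, ch=16, n_blocks=4):
        super().__init__()
        self.head = conv_block(1, ch)
        self.blocks = nn.ModuleList([conv_block(ch, ch) for _ in range(n_blocks)])
        self.tail = nn.Conv3d(ch, 1, kernel_size=1)

    def forward(self, x):
        h = self.head(x)
        for block in self.blocks:
            h = h + block(h)       # local residual connection
        return x + self.tail(h)    # global residual: refine, not replace


# Example: fuse two 64^3 thick-section volumes, then refine the result.
axial = torch.randn(1, 1, 64, 64, 64)
sagittal = torch.randn(1, 1, 64, 64, 64)
thin_section = ResidualRefiner()(Stage1FusionGenerator()(axial, sagittal))
print(thin_section.shape)  # torch.Size([1, 1, 64, 64, 64])
```

The global residual connection in the second stage mirrors the abstract's description of that stage as a detail recalibration and structural correction of the first-stage output rather than a reconstruction from scratch.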
Pages: 68290-68304
Number of pages: 15