Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)

Cited by: 116
Authors
Bi, Lei [1 ]
Kim, Jinman [1 ]
Kumar, Ashnil [1 ]
Feng, Dagan [1 ,2 ]
Fulham, Michael [1 ,3 ,4 ]
Affiliations
[1] Univ Sydney, Sch Informat Technol, Sydney, NSW, Australia
[2] Shanghai Jiao Tong Univ, Med X Res Inst, Shanghai, Peoples R China
[3] Royal Prince Alfred Hosp, Dept Mol Imaging, Sydney, NSW, Australia
[4] Univ Sydney, Sydney Med Sch, Sydney, NSW, Australia
Source
MOLECULAR IMAGING, RECONSTRUCTION AND ANALYSIS OF MOVING BODY ORGANS, AND STROKE IMAGING AND TREATMENT | 2017 / Vol. 10555
Keywords
Positron Emission Tomography (PET); Generative Adversarial Networks (GANs); Image synthesis; SEGMENTATION; TUMOR; RECONSTRUCTION; CLASSIFICATION; DELINEATION
DOI
10.1007/978-3-319-67564-0_5
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
Positron emission tomography (PET) imaging is widely used for staging and monitoring treatment in a variety of cancers, including lymphoma and lung cancer. Recently, there has been a marked increase in the accuracy and robustness of machine learning methods and their application to computer-aided diagnosis (CAD) systems, e.g., the automated detection and quantification of abnormalities in medical images. Successful machine learning methods require large amounts of training data; hence, the synthesis of PET images could play an important role in enlarging training sets and ultimately improving the accuracy of PET-based CAD systems. Existing approaches, such as atlas-based methods or methods based on simulated or physical phantoms, struggle to reproduce the low resolution and low signal-to-noise ratio inherent in PET images. In addition, these methods usually have limited capacity to produce a variety of synthetic PET images with large anatomical and functional differences. Hence, we propose a new method to synthesize PET data via multi-channel generative adversarial networks (M-GAN) to address these limitations. In contrast to existing medical image synthesis methods that rely on low-level features, our M-GAN captures feature representations with high-level semantic information through adversarial learning. Within a single framework, the M-GAN takes the annotation (label) as input to synthesize regions of high uptake, e.g., tumors, uses the computed tomography (CT) image to constrain appearance consistency via CT-derived anatomical information, and outputs the synthetic PET image directly. Experiments on 50 lung cancer PET-CT studies show that our method produces more realistic PET images than conventional GAN methods. Furthermore, a PET tumor detection model trained on our synthetic PET data performed competitively with a model trained on real PET data (2.79% lower recall). We suggest that our approach, used with a combination of real and synthetic images, can boost the training data available to machine learning methods.
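The abstract describes a GAN that is conditioned on two input channels, a tumor label map and a CT image, and that outputs a synthetic PET image within one adversarial framework. The following Python (PyTorch) sketch shows one way such a multi-channel conditional GAN could be wired; the network depth, channel counts, PatchGAN-style discriminator, and L1-weighted generator loss are illustrative assumptions in the spirit of pix2pix-style training, not the authors' exact architecture or hyperparameters.

    # Minimal sketch of a multi-channel conditional GAN for PET synthesis.
    # The generator receives the label map and the CT slice as separate input
    # channels and outputs a synthetic PET slice; the discriminator judges
    # (label, CT, PET) stacks. All sizes below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, in_channels=2, out_channels=1, base=64):
            super().__init__()
            # Simple encoder-decoder; a full model would add skip connections.
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
                nn.BatchNorm2d(base * 2),
                nn.LeakyReLU(0.2),
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
                nn.BatchNorm2d(base),
                nn.ReLU(),
                nn.ConvTranspose2d(base, out_channels, 4, stride=2, padding=1),
                nn.Tanh(),  # synthetic PET intensities in [-1, 1]
            )

        def forward(self, label_map, ct):
            # Multi-channel conditioning: label map and CT stacked along channels.
            return self.net(torch.cat([label_map, ct], dim=1))

    class Discriminator(nn.Module):
        def __init__(self, in_channels=3, base=64):
            super().__init__()
            # PatchGAN-style critic over (label, CT, PET) stacks.
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
                nn.BatchNorm2d(base * 2),
                nn.LeakyReLU(0.2),
                nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),
                nn.Sigmoid(),
            )

        def forward(self, label_map, ct, pet):
            return self.net(torch.cat([label_map, ct, pet], dim=1))

    def generator_loss(disc_out_fake, fake_pet, real_pet, l1_weight=100.0):
        # Adversarial term plus an L1 reconstruction term against the real PET;
        # the 100.0 weighting is an assumed, commonly used default.
        adv = nn.BCELoss()(disc_out_fake, torch.ones_like(disc_out_fake))
        rec = nn.L1Loss()(fake_pet, real_pet)
        return adv + l1_weight * rec

The key design point illustrated here is the channel-wise concatenation of the label and CT inputs before the first convolution, which lets a single generator learn both where high-uptake regions should appear (from the label) and how surrounding anatomy should constrain appearance (from the CT).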
Pages: 43-51
Number of pages: 9