PET-Train: Automatic Ground Truth Generation from PET Acquisitions for Urinary Bladder Segmentation in CT Images using Deep Learning

Cited: 0
Authors
Gsaxner, Christina [1 ,2 ,3 ]
Pfarrkirchner, Birgit [1 ,3 ]
Lindner, Lydia [1 ,3 ]
Pepe, Antonio [1 ,3 ]
Roth, Peter M. [1 ]
Egger, Jan [1 ,2 ,3 ]
Wallner, Juergen [2 ,3 ]
Affiliations
[1] Graz Univ Technol, Inst Comp Graph & Vis, Graz, Austria
[2] Med Univ Graz, Dept Maxillofacial Surg, Graz, Austria
[3] Comp Algorithms Med Lab, Graz, Austria
Source
2018 11TH BIOMEDICAL ENGINEERING INTERNATIONAL CONFERENCE (BMEICON 2018) | 2018
Funding
Austrian Science Fund;
Keywords
Deep Learning; Medical Imaging; Segmentation; PET/CT; Urinary Bladder;
DOI
Not available
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
In this contribution, we propose an automatic ground truth generation approach that utilizes Positron Emission Tomography (PET) acquisitions to train neural networks for automatic urinary bladder segmentation in Computed Tomography (CT) images. We evaluated different deep learning architectures to segment the urinary bladder. However, deep neural networks require a large amount of training data, which is currently the main bottleneck in the medical field, because ground truth labels have to be created by medical experts on a time-consuming slice-by-slice basis. To overcome this problem, we generate the training data set from the PET data of combined PET/CT acquisitions. This can be achieved by applying simple thresholding to the PET data, where the radiotracer accumulates very distinctly in the urinary bladder. However, the ultimate goal is to entirely skip PET imaging and its additional radiation exposure in the future, and only use CT images for segmentation.
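As a rough illustration of the thresholding step described in the abstract, the following Python sketch turns a co-registered PET volume into a binary urinary-bladder label that could be paired with the corresponding CT images as training data. The use of SimpleITK, the file names, and the threshold value are assumptions for illustration only, not details taken from the paper.

```python
import SimpleITK as sitk

def pet_to_bladder_mask(pet_path, threshold=15.0):
    """Create a binary training label from a PET volume by simple thresholding.

    `threshold` is a hypothetical intensity cutoff; the paper's actual value
    and preprocessing (e.g., SUV conversion) are not reproduced here.
    """
    pet = sitk.ReadImage(pet_path)  # PET volume of a combined PET/CT acquisition
    # Voxels above the cutoff (strong radiotracer accumulation, e.g., the
    # urinary bladder) are labeled 1, everything else 0.
    mask = sitk.BinaryThreshold(pet,
                                lowerThreshold=threshold,
                                upperThreshold=1e9,
                                insideValue=1,
                                outsideValue=0)
    return mask

# The resulting mask can serve as an automatically generated ground truth
# label for the spatially aligned CT volume of the same acquisition.
mask = pet_to_bladder_mask("patient_001_pet.nii.gz")
sitk.WriteImage(mask, "patient_001_bladder_label.nii.gz")
```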
Pages: 5