Contrast Phase Classification with a Generative Adversarial Network

Cited: 3
Authors
Tang, Yucheng [1 ]
Lee, Ho Hin [1 ]
Xu, Yuchen [1 ]
Tang, Olivia [1 ]
Chen, Yunqiang [2 ]
Gao, Dashan [2 ]
Han, Shizhong [2 ]
Gao, Riqiang [1 ]
Bermudez, Camilo [5 ]
Savona, Michael R. [3 ]
Abramson, Richard G. [4 ]
Huo, Yuankai [1 ]
Landman, Bennett A. [1 ,4 ,5 ]
Affiliations
[1] Vanderbilt Univ, Dept Elect Engn & Comp Sci, Nashville, TN 37212 USA
[2] 12 Sigma Technol, San Diego, CA 92130 USA
[3] Vanderbilt Univ, Hematol & Oncol, Med Ctr, Nashville, TN 37235 USA
[4] Vanderbilt Univ, Radiol, Med Ctr, Nashville, TN 37235 USA
[5] Vanderbilt Univ, Dept Biomed Engn, Nashville, TN 37212 USA
Source
MEDICAL IMAGING 2020: IMAGE PROCESSING | 2021 / Vol. 11313
Keywords
computed tomography; contrast phase; disentangled representation; GAN; classification; HEPATOCELLULAR-CARCINOMA; INJECTION PROTOCOL; CT;
DOI
10.1117/12.2549438
Chinese Library Classification
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
Dynamic contrast-enhanced computed tomography (CT) is an imaging technique that provides critical information on the relationship between vascular structure and dynamics in the context of the underlying anatomy. A key challenge for image processing with contrast-enhanced CT is that phase discrepancies are latent in different tissues due to contrast protocols, vascular dynamics, and metabolic variance. Previous studies have proposed deep learning frameworks for classifying contrast enhancement with networks inspired by computer vision. Here, we revisit the challenge in the context of whole-abdomen contrast-enhanced CT. To capture and compensate for the complex contrast changes, we propose a novel discriminator in the form of a multi-domain disentangled representation learning network. The goal of this network is to learn an intermediate representation that separates contrast enhancement from anatomy and enables classification of images with varying contrast time. Briefly, the discriminator of our unpaired contrast-disentangling GAN (CD-GAN) follows the ResNet architecture to classify a CT scan into enhancement phases. To evaluate the approach, we trained the enhancement phase classifier on 21,060 slices from two clinical cohorts of 230 subjects. The scans were manually labeled with three independent enhancement phases (non-contrast, portal venous, and delayed). Testing was performed on 9,100 slices from 30 independent subjects who had been imaged with CT scans from all contrast phases. Performance was quantified in terms of the multi-class normalized confusion matrix. The proposed network significantly improved over the baselines, with accuracy scores of 0.54, 0.55, and 0.62 for UNet, ResNet50, and StarGAN, respectively, versus 0.91 for the proposed method (p < 0.0001, paired t-test, ResNet50 vs. CD-GAN). The proposed discriminator from the disentangled network presents a promising technique that may allow deeper modeling of dynamic imaging against patient-specific anatomies.
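The evaluation metric named in the abstract, a multi-class normalized confusion matrix over the three enhancement phases, can be sketched as follows. This is a minimal illustration, not code from the paper: the label encoding (0 = non-contrast, 1 = portal venous, 2 = delayed) and the toy predictions are assumptions made for the example.

```python
import numpy as np

def normalized_confusion_matrix(y_true, y_pred, n_classes=3):
    """Row-normalized confusion matrix: entry [i, j] is the fraction of
    class-i samples that the classifier predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    row_sums = cm.sum(axis=1, keepdims=True)
    # Guard against empty classes before normalizing each row.
    return cm / np.where(row_sums == 0, 1, row_sums)

# Toy labels; assumed encoding: 0 = non-contrast, 1 = portal venous, 2 = delayed
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

cm = normalized_confusion_matrix(y_true, y_pred)
accuracy = np.mean(np.array(y_true) == np.array(y_pred))
```

The diagonal of `cm` then gives the per-phase recall, which is what a row-normalized confusion matrix emphasizes when class sizes differ across phases.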
Pages: 8
Related Papers
21 references in total
  • [1] Anonymous. IEEE Transactions on Medical Imaging, 2019.
  • [2] Awai K, Hori S. Effect of contrast injection protocol with dose tailored to patient weight and fixed injection duration on aortic and hepatic enhancement at multidetector-row helical CT. European Radiology, 2003, 13(9): 2155-2160.
  • [3] Bae KT. Intravenous Contrast Medium Administration and Scan Timing at CT: Considerations and Approaches. Radiology, 2010, 256(1): 32-61.
  • [4] Baron RL, Oliver JH, Dodd GD, Nalesnik M, Holbert BL, Carr B. Hepatocellular carcinoma: Evaluation with biphasic, contrast-enhanced, helical CT. Radiology, 1996, 199(2): 505-511.
  • [5] Chen X, et al. Advances in Neural Information Processing Systems, 2016, 29.
  • [6] Choi Y, Choi M, Kim M, Ha JW, Kim S, Choo J. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 8789-8797.
  • [7] Deng J, et al. Proc. IEEE CVPR, 2009: 248. DOI: 10.1109/CVPRW.2009.5206848.
  • [8] Goodfellow I, et al. Communications of the ACM, 2020, 63: 139. DOI: 10.1145/3422622.
  • [9] Gulrajani I, et al. Proceedings of the Conference on Neural Information Processing Systems, 2017, 30: 1.
  • [10] He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.