Disentangled representation learning in cardiac image analysis

Cited by: 117
Authors
Chartsias, Agisilaos [1]
Joyce, Thomas [1]
Papanastasiou, Giorgos [2,3]
Semple, Scott [2,3]
Williams, Michelle [2,3]
Newby, David E. [2,3]
Dharmakumar, Rohan [4]
Tsaftaris, Sotirios A. [1,5]
Affiliations
[1] Univ Edinburgh, Inst Digital Commun, Sch Engn, West Mains Rd, Edinburgh EH9 3FB, Midlothian, Scotland
[2] Edinburgh Imaging Facil QMRI, Edinburgh EH16 4TJ, Midlothian, Scotland
[3] Ctr Cardiovasc Sci, Edinburgh EH16 4TJ, Midlothian, Scotland
[4] Cedars Sinai Med Ctr, Los Angeles, CA 90048 USA
[5] Alan Turing Inst, London, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); US National Institutes of Health (NIH);
Keywords
Disentangled representation learning; Cardiac magnetic resonance imaging; Semi-supervised segmentation; Multitask learning; Whole heart segmentation;
DOI
10.1016/j.media.2019.101535
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Typically, a medical image offers spatial information on the anatomy (and pathology) modulated by imaging-specific characteristics. Many imaging modalities including Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) can be interpreted in this way. We can venture further and consider that a medical image naturally factors into some spatial factors depicting anatomy and factors that denote the imaging characteristics. Here, we explicitly learn this decomposed (disentangled) representation of imaging data, focusing in particular on cardiac images. We propose the Spatial Decomposition Network (SDNet), which factorises 2D medical images into spatial anatomical factors and non-spatial modality factors. We demonstrate that this high-level representation is ideally suited for several medical image analysis tasks, such as semi-supervised segmentation, multi-task segmentation and regression, and image-to-image synthesis. Specifically, we show that our model can match the performance of fully supervised segmentation models, using only a fraction of the labelled images. Critically, we show that our factorised representation also benefits from supervision obtained either when we use auxiliary tasks to train the model in a multi-task setting (e.g. regressing to known cardiac indices), or when aggregating multimodal data from different sources (e.g. pooling together MRI and CT data). To explore the properties of the learned factorisation, we perform latent-space arithmetic and show that we can synthesise CT from MR and vice versa, by swapping the modality factors. We also demonstrate that the factor holding image-specific information can be used to predict the input modality with high accuracy. Code will be made available at https://github.com/agis85/anatomy_modality_decomposition. (C) 2019 Elsevier B.V. All rights reserved.
Pages: 13
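As a rough illustration of the factorisation described in the abstract, the sketch below separates an image into spatial anatomy channels and a non-spatial modality vector, then decodes one image's anatomy with the other's modality vector (the "swap the modality factors" idea used for MR-to-CT synthesis). All module names, layer sizes, and the untrained forward pass are assumptions made for this sketch; it is not the authors' SDNet implementation, which is available at the repository linked above.

```python
# Minimal sketch of anatomy/modality factorisation and modality swapping.
# Architecture details here are illustrative assumptions, not SDNet itself.
import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    """Maps an image to C spatial (image-sized) anatomical factor channels."""
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Sigmoid(),  # soft spatial maps
        )
    def forward(self, x):
        return self.net(x)

class ModalityEncoder(nn.Module):
    """Maps an image plus its anatomy factors to a small non-spatial vector."""
    def __init__(self, channels=8, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, z_dim),
        )
    def forward(self, x, anatomy):
        return self.net(torch.cat([x, anatomy], dim=1))

class Decoder(nn.Module):
    """Recombines spatial anatomy factors with a modality vector into an image."""
    def __init__(self, channels=8, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + z_dim, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, anatomy, z):
        # Broadcast the modality vector over the spatial grid before decoding.
        z_map = z[:, :, None, None].expand(-1, -1, *anatomy.shape[2:])
        return self.net(torch.cat([anatomy, z_map], dim=1))

# Untrained forward pass showing the latent-space swap: decode MR anatomy
# with the CT modality vector, and vice versa.
enc_a, enc_m, dec = AnatomyEncoder(), ModalityEncoder(), Decoder()
mr, ct = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
a_mr, a_ct = enc_a(mr), enc_a(ct)
z_mr, z_ct = enc_m(mr, a_mr), enc_m(ct, a_ct)
fake_ct_from_mr = dec(a_mr, z_ct)  # MR anatomy rendered with CT appearance
fake_mr_from_ct = dec(a_ct, z_mr)  # CT anatomy rendered with MR appearance
```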