One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation

Cited by: 23
Authors
Liu, Jiang [1 ]
Pasumarthi, Srivathsa [2 ]
Duffy, Ben [2 ]
Gong, Enhao [2 ]
Datta, Keshav [2 ]
Zaharchuk, Greg [3 ]
Affiliations
[1] Johns Hopkins Univ, Dept Elect & Comp Engn, Baltimore, MD 21218 USA
[2] Subtle Med Inc, Menlo Pk, CA 94025 USA
[3] Stanford Univ, Dept Radiol, Stanford, CA 94305 USA
Keywords
Transformers; magnetic resonance imaging (MRI); decoding; convolutional neural networks; image synthesis; data models; generative adversarial networks; vision transformer; deep learning; missing data imputation; multi-contrast MRI
DOI
10.1109/TMI.2023.3261707
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice, as each contrast provides complementary information. However, the availability of each imaging contrast may vary among patients, which poses challenges for radiologists and automated image analysis algorithms. A general approach to this problem is missing data imputation, which aims to synthesize the missing contrasts from the existing ones. While several convolutional neural network (CNN)-based algorithms have been proposed, they suffer from fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of the inputs and a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable: the built-in attention maps of the Transformer blocks in the decoder reveal the importance of each input contrast in different regions. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms state-of-the-art methods both quantitatively and qualitatively.
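To make the sequence-to-sequence formulation above concrete, the following is a minimal PyTorch sketch of the core idea: each available contrast is split into patch tokens tagged with a learned contrast embedding, a Transformer encoder builds a joint representation, and learned per-contrast queries cross-attend to it to decode each missing contrast. This is not the authors' MMT implementation: plain single-scale attention stands in for the multi-contrast Swin blocks, and every name, shape, and hyperparameter below (`MultiContrastImputer`, `patch=8`, `dim=128`, etc.) is an illustrative assumption.

```python
# Hypothetical sketch of multi-contrast sequence-to-sequence imputation;
# single-scale attention replaces MMT's multi-scale Swin blocks.
import torch
import torch.nn as nn

class MultiContrastImputer(nn.Module):
    def __init__(self, n_contrasts=4, img_size=64, patch=8, dim=128):
        super().__init__()
        self.img_size, self.patch = img_size, patch
        self.n_tokens = (img_size // patch) ** 2            # tokens per contrast
        self.embed = nn.Linear(patch * patch, dim)          # shared patch embedding
        self.contrast_emb = nn.Embedding(n_contrasts, dim)  # marks a token's source contrast
        self.pos_emb = nn.Parameter(torch.zeros(1, self.n_tokens, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        # one learned query set per contrast, used when that contrast is missing
        self.queries = nn.Parameter(0.02 * torch.randn(n_contrasts, self.n_tokens, dim))
        self.head = nn.Linear(dim, patch * patch)           # token -> pixel patch

    def tokenize(self, img):                                # (B, H, W) -> (B, N, p*p)
        p = self.patch
        patches = img.unfold(1, p, p).unfold(2, p, p)       # (B, H/p, W/p, p, p)
        return patches.reshape(img.shape[0], self.n_tokens, p * p)

    def untokenize(self, toks):                             # (B, N, p*p) -> (B, H, W)
        B, p, g = toks.shape[0], self.patch, self.img_size // self.patch
        x = toks.reshape(B, g, g, p, p).permute(0, 1, 3, 2, 4)
        return x.reshape(B, self.img_size, self.img_size)

    def forward(self, images, present_idx, missing_idx):
        # images: one (B, H, W) tensor per *available* contrast, in present_idx order
        toks = [self.embed(self.tokenize(img)) + self.pos_emb + self.contrast_emb.weight[c]
                for img, c in zip(images, present_idx)]
        memory = self.encoder(torch.cat(toks, dim=1))       # joint code over all inputs
        B, outs = memory.shape[0], []
        for c in missing_idx:                               # decode each missing contrast
            q = self.queries[c].unsqueeze(0).expand(B, -1, -1) + self.pos_emb
            outs.append(self.untokenize(self.head(self.decoder(q, memory))))
        return outs

model = MultiContrastImputer(n_contrasts=3)
t1, t2 = torch.randn(2, 64, 64), torch.randn(2, 64, 64)    # two acquired contrasts
(flair_hat,) = model([t1, t2], present_idx=[0, 1], missing_idx=[2])
print(flair_hat.shape)                                      # torch.Size([2, 64, 64])
```

Because any subset of contrasts can be passed in `present_idx` and any subset requested in `missing_idx`, a single model handles all input/output combinations, which is the property the abstract highlights over fixed-channel CNNs. The decoder's cross-attention weights (retrievable in a custom layer via `nn.MultiheadAttention` with `need_weights=True`) would show how much each output region draws on each input contrast, loosely mirroring the attention-map interpretability the paper describes.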
Pages: 2577-2591 (15 pages)
Related Papers
26 in total; 10 listed below
  • [1] One model to unite them all: Personalized federated learning of multi-contrast MRI synthesis
    Dalmaz, Onat; Mirza, Muhammad U.; Elmas, Gokberk; Ozbey, Muzaffer; Dar, Salman U. H.; Ceyani, Emir; Oguz, Kader K.; Avestimehr, Salman; Cukur, Tolga
    Medical Image Analysis, 2024, 94
  • [2] Multi-scale deformable transformer for multi-contrast knee MRI super-resolution
    Zou, Beiji; Ji, Zexin; Zhu, Chengzhang; Dai, Yulan; Zhang, Wensheng; Kui, Xiaoyan
    Biomedical Signal Processing and Control, 2023, 79
  • [3] Synthesize High-Quality Multi-Contrast Magnetic Resonance Imaging From Multi-Echo Acquisition Using Multi-Task Deep Generative Model
    Wang, Guanhua; Gong, Enhao; Banerjee, Suchandrima; Martin, Dann; Tong, Elizabeth; Choi, Ja; Chen, Huijun; Wintermark, Max; Pauly, John M.; Zaharchuk, Greg
    IEEE Transactions on Medical Imaging, 2020, 39 (10): 3089-3099
  • [4] Multi-contrast multi-scale surface registration for improved alignment of cortical areas
    Tardif, Christine Lucas; Schaefer, Andreas; Waehnert, Miriam; Dinse, Juliane; Turner, Robert; Bazin, Pierre-Louis
    NeuroImage, 2015, 111: 107-122
  • [5] MD-GraphFormer: A Model-Driven Graph Transformer for Fast Multi-Contrast MR Imaging
    Wang, Jiazhen; Yang, Yan; Yang, Heran; Lian, Chunfeng; Xu, Zongben; Sun, Jian
    IEEE Transactions on Computational Imaging, 2023, 9: 1018-1030
  • [6] Data-efficient multi-scale fusion vision transformer
    Tang, Hao; Liu, Dawei; Shen, Chengchao
    Pattern Recognition, 2025, 161
  • [7] Seismic Data Interpolation Based on Multi-Scale Transformer
    Guo, Yuanqi; Fu, Lihua; Li, Hongwei
    IEEE Geoscience and Remote Sensing Letters, 2023, 20
  • [8] Scope-Free Global Multi-Condition-Aware Industrial Missing Data Imputation Framework via Diffusion Transformer
    Liu, Diju; Wang, Yalin; Liu, Chenliang; Yuan, Xiaofeng; Wang, Kai; Yang, Chunhua
    IEEE Transactions on Knowledge and Data Engineering, 2024, 36 (11): 6977-6988
  • [9] MAPANet: A Multi-Scale Attention-Guided Progressive Aggregation Network for Multi-Contrast MRI Super-Resolution
    Liu, Licheng; Liu, Tao; Zhou, Wei; Wang, Yaonan; Liu, Min
    IEEE Transactions on Computational Imaging, 2024, 10: 928-940
  • [10] A novel multi-scale network intrusion detection model with transformer
    Xi, Chiming; Wang, Hui; Wang, Xubin
    Scientific Reports, 2024, 14 (1)