Single-subject Multi-contrast MRI Super-resolution via Implicit Neural Representations

Cited: 12
Authors
McGinnis, Julian [1 ,2 ,3 ]
Shit, Suprosanna [1 ,4 ,5 ]
Li, Hongwei Bran [5 ]
Sideri-Lampretsa, Vasiliki [1 ]
Graf, Robert [1 ,4 ]
Dannecker, Maik [1 ]
Pan, Jiazhen [1 ]
Stolt-Anso, Nil [1 ]
Muehlau, Mark [2 ,3 ]
Kirschke, Jan S. [4 ]
Rueckert, Daniel [1 ,6 ]
Wiestler, Benedikt [4 ]
Affiliations
[1] Tech Univ Munich, Sch Computat Informat & Technol, Munich, Germany
[2] Tech Univ Munich, TUM Neuroimaging Ctr, Munich, Germany
[3] Tech Univ Munich, Dept Neurol, Munich, Germany
[4] Tech Univ Munich, Dept Neuroradiol, Munich, Germany
[5] Univ Zurich, Dept Quantitat Biomed, Zurich, Switzerland
[6] Imperial Coll London, Dept Comp, London, England
Source
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT VIII | 2023 / Vol. 14227
Funding
European Research Council;
Keywords
Multi-contrast Super-resolution; Implicit Neural Representations; Mutual Information; IMAGE SUPERRESOLUTION; RECONSTRUCTION;
DOI
10.1007/978-3-031-43993-3_17
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Clinical routine and retrospective cohorts commonly include multi-parametric Magnetic Resonance Imaging; however, the scans are mostly acquired in different anisotropic 2D views due to signal-to-noise-ratio and scan-time constraints. The views thus acquired suffer from poor out-of-plane resolution, which hampers downstream volumetric image analysis that typically requires isotropic 3D scans. Combining different views of multi-contrast scans into high-resolution isotropic 3D scans is challenging due to the lack of a large training cohort, which calls for a subject-specific framework. This work proposes a novel solution to this problem leveraging Implicit Neural Representations (INR). Our proposed INR jointly learns two different contrasts of complementary views in a continuous spatial function and benefits from exchanging anatomical information between them. Trained within minutes on a single commodity GPU, our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets. Using Mutual Information (MI) as a metric, we find that our model converges to an optimum MI amongst sequences, achieving anatomically faithful reconstruction. (Code is available at: https://github.com/jqmcginnis/multi_contrast_inr/).
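The Mutual Information criterion the abstract uses to assess convergence can be sketched with a simple histogram-based estimator. This is an illustrative implementation only, not the authors' code: the function name `mutual_information`, the bin count, and the quantization scheme are assumptions for demonstration; the paper's repository should be consulted for the actual metric computation.

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Estimate mutual information (in nats) between two equal-length
    intensity sequences by quantizing each into `bins` bins and
    histogramming the joint distribution."""
    assert len(a) == len(b) and len(a) > 0

    lo_a, hi_a = min(a), max(a)
    lo_b, hi_b = min(b), max(b)

    def quantize(v, lo, hi):
        # Map v in [lo, hi] to a bin index in [0, bins - 1].
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)

    # Joint histogram over paired intensity bins.
    joint = Counter(
        (quantize(x, lo_a, hi_a), quantize(y, lo_b, hi_b))
        for x, y in zip(a, b)
    )
    n = len(a)

    # Marginal histograms derived from the joint counts.
    px, py = Counter(), Counter()
    for (i, j), c in joint.items():
        px[i] += c
        py[j] += c

    # MI = sum over joint bins of p(x,y) * log(p(x,y) / (p(x) p(y))).
    mi = 0.0
    for (i, j), c in joint.items():
        pxy = c / n
        mi += pxy * math.log(c * n / (px[i] * py[j]))
    return mi
```

For two identical sequences the estimate equals the (binned) entropy, and for independent sequences it approaches zero, which is the behavior one would track when checking that jointly trained contrasts converge toward a stable MI optimum.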
Pages: 173-183
Page count: 11