Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution

Cited by: 62
Authors
Li, Guangyuan [1]
Lv, Jun [1]
Tian, Yapeng [2]
Dou, Qi [3]
Wang, Chengyan [4]
Xu, Chenliang [2]
Qin, Jing [5]
Affiliations
[1] Yantai Univ, Sch Comp & Control Engn, Yantai, Peoples R China
[2] Univ Rochester, Rochester, NY 14627 USA
[3] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[4] Fudan Univ, Human Phenome Inst, Shanghai, Peoples R China
[5] Hong Kong Polytech Univ, Ctr Smart Hlth, Sch Nursing, Hong Kong, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
Funding
National Natural Science Foundation of China;
Keywords
BRAIN MRI; IMAGE SUPERRESOLUTION; RESOLUTION; ALGORITHM;
DOI
10.1109/CVPR52688.2022.01998
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Magnetic resonance imaging (MRI) can present multi-contrast images of the same anatomical structures, enabling multi-contrast super-resolution (SR) techniques. Compared with SR reconstruction using a single contrast, multi-contrast SR reconstruction is promising to yield SR images with higher quality by leveraging diverse yet complementary information embedded in different imaging modalities. However, existing methods still have two shortcomings: (1) they neglect that the multi-contrast features at different scales contain different anatomical details, and hence lack effective mechanisms to match and fuse these features for better reconstruction; and (2) they are still deficient in capturing long-range dependencies, which are essential for the regions with complicated anatomical structures. We propose a novel network to comprehensively address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques; we call it McMRSR. Firstly, we tame transformers to model long-range dependencies in both reference and target images. Then, a new multi-scale contextual matching method is proposed to capture corresponding contexts from reference features at different scales. Furthermore, we introduce a multi-scale aggregation mechanism to gradually and interactively aggregate multi-scale matched features for reconstructing the target SR MR image. Extensive experiments demonstrate that our network outperforms state-of-the-art approaches and has great potential to be applied in clinical practice.
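The abstract describes the pipeline only at a high level. Below is a minimal, hypothetical Python/PyTorch sketch (not the authors' McMRSR implementation) of the general idea of matching target features against reference features at several scales via cross-attention and then aggregating the matched contexts; the module names, shapes, and the use of torch.nn.MultiheadAttention are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleMatch(nn.Module):
    # Match target features (queries) against reference features (keys/values)
    # at a single scale using standard cross-attention. Illustrative only.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tgt, ref):
        # tgt, ref: (B, C, H, W) feature maps; flatten spatial dims into tokens
        b, c, h, w = tgt.shape
        q = tgt.flatten(2).transpose(1, 2)       # (B, H*W, C)
        kv = ref.flatten(2).transpose(1, 2)
        matched, _ = self.attn(q, kv, kv)        # contexts borrowed from the reference
        return matched.transpose(1, 2).view(b, c, h, w)

class MultiScaleMatchAggregate(nn.Module):
    # Match at several scales, then fuse the matched contexts at the target resolution.
    def __init__(self, dim, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.match = nn.ModuleList([CrossScaleMatch(dim) for _ in scales])
        self.fuse = nn.Conv2d(dim * len(scales), dim, kernel_size=3, padding=1)

    def forward(self, tgt, ref):
        outs = []
        for s, m in zip(self.scales, self.match):
            t = F.avg_pool2d(tgt, s) if s > 1 else tgt
            r = F.avg_pool2d(ref, s) if s > 1 else ref
            o = m(t, r)
            if s > 1:  # bring the matched context back to the target resolution
                o = F.interpolate(o, size=tgt.shape[-2:], mode='bilinear', align_corners=False)
            outs.append(o)
        return self.fuse(torch.cat(outs, dim=1))  # aggregated multi-scale context

# Usage on dummy target/reference feature maps:
# tgt, ref = torch.randn(1, 64, 60, 60), torch.randn(1, 64, 60, 60)
# out = MultiScaleMatchAggregate(64)(tgt, ref)    # (1, 64, 60, 60)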
Pages: 20604-20613
Number of pages: 10