Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution

Cited by: 62
Authors
Li, Guangyuan [1 ]
Lv, Jun [1 ]
Tian, Yapeng [2 ]
Dou, Qi [3 ]
Wang, Chengyan [4 ]
Xu, Chenliang [2 ]
Qin, Jing [5 ]
Affiliations
[1] Yantai Univ, Sch Comp & Control Engn, Yantai, Peoples R China
[2] Univ Rochester, Rochester, NY 14627 USA
[3] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[4] Fudan Univ, Human Phenome Inst, Shanghai, Peoples R China
[5] Hong Kong Polytech Univ, Ctr Smart Hlth, Sch Nursing, Hong Kong, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
Funding
National Natural Science Foundation of China;
Keywords
BRAIN MRI; IMAGE SUPERRESOLUTION; RESOLUTION; ALGORITHM;
DOI
10.1109/CVPR52688.2022.01998
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Magnetic resonance imaging (MRI) can present multi-contrast images of the same anatomical structures, enabling multi-contrast super-resolution (SR) techniques. Compared with SR reconstruction using a single contrast, multi-contrast SR reconstruction is promising to yield SR images with higher quality by leveraging diverse yet complementary information embedded in different imaging modalities. However, existing methods still have two shortcomings: (1) they neglect that the multi-contrast features at different scales contain different anatomical details and hence lack effective mechanisms to match and fuse these features for better reconstruction; and (2) they are still deficient in capturing long-range dependencies, which are essential for the regions with complicated anatomical structures. We propose a novel network to comprehensively address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques; we call it McMRSR. Firstly, we tame transformers to model long-range dependencies in both reference and target images. Then, a new multi-scale contextual matching method is proposed to capture corresponding contexts from reference features at different scales. Furthermore, we introduce a multi-scale aggregation mechanism to gradually and interactively aggregate multi-scale matched features for reconstructing the target SR MR image. Extensive experiments demonstrate that our network outperforms state-of-the-art approaches and has great potential to be applied in clinical practice.
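The core idea in the abstract, matching each target-image feature against reference-image features at several scales and then fusing the matches, can be sketched in a simplified form. The sketch below is an illustration only, not the paper's McMRSR implementation: the function names, the hard (argmax) cosine matching, and the softmax confidence-weighted fusion across scales are all assumptions made for clarity; the actual method uses Transformer attention and a learned, interactive aggregation.

```python
import numpy as np

def match_reference(target, reference):
    """For each target patch embedding, pick the most similar reference
    patch embedding by cosine similarity (hard matching).

    target:    (Nt, C) array of target patch embeddings
    reference: (Nr, C) array of reference patch embeddings
    Returns the matched features (Nt, C) and a confidence score (Nt,).
    """
    t = target / (np.linalg.norm(target, axis=1, keepdims=True) + 1e-8)
    r = reference / (np.linalg.norm(reference, axis=1, keepdims=True) + 1e-8)
    sim = t @ r.T                       # (Nt, Nr) cosine similarities
    idx = sim.argmax(axis=1)            # best-matching reference patch
    conf = sim.max(axis=1)              # matching confidence
    return reference[idx], conf

def aggregate_scales(matched, confidences):
    """Fuse matched features from several scales by a softmax
    confidence-weighted average (all scales share shape (Nt, C) here)."""
    w = np.exp(np.stack(confidences))   # (S, Nt)
    w = w / w.sum(axis=0, keepdims=True)
    feats = np.stack(matched)           # (S, Nt, C)
    return (w[..., None] * feats).sum(axis=0)

# Toy example: two scales, 4 target patches, 6 reference patches, dim 8.
rng = np.random.default_rng(0)
target = rng.standard_normal((4, 8))
ref_scales = [rng.standard_normal((6, 8)) for _ in range(2)]
matched, confs = zip(*(match_reference(target, r) for r in ref_scales))
fused = aggregate_scales(matched, confs)
print(fused.shape)  # (4, 8)
```

In the paper these steps are learned end to end and operate on deep feature maps; this toy version only conveys the match-then-aggregate structure across scales.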
Pages: 20604-20613
Page count: 10