Wide Weighted Attention Multi-Scale Network for Accurate MR Image Super-Resolution

Cited by: 34
Authors
Wang, Haoqian [1,2]
Hu, Xiaowan [1,2]
Zhao, Xiaole [3]
Zhang, Yulun [4]
Affiliations
[1] Tsinghua Univ, Int Grad Sch Shenzhen, Beijing 100084, Peoples R China
[2] Shenzhen Inst Future Media Technol, Shenzhen 518055, Peoples R China
[3] Southwest Jiaotong Univ SWJTU, Sch Informat Sci & Technol, Chengdu 611756, Peoples R China
[4] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
Keywords
Magnetic resonance; super-resolution; multiscale; non-reduction attention mechanism; weighted fusion; RESOLUTION
DOI
10.1109/TCSVT.2021.3070489
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic and communication technology]
Discipline classification codes
0808; 0809
Abstract
High-quality magnetic resonance (MR) images afford more detailed information for reliable diagnoses and quantitative image analyses. Given low-resolution (LR) inputs, deep convolutional neural networks (CNNs) have shown promising ability for image super-resolution (SR). LR MR images usually share several visual characteristics: structural textures of different sizes, highly correlated edges, and a less informative background. Multi-scale structural features are informative for image reconstruction, whereas the background is smoother. Most previous CNN-based SR methods use a single receptive field and treat all spatial pixels (including the background) equally; they neglect to sense the entire space and to extract diversified features from the input, which is critical for high-quality MR image SR. To address these problems, we propose a wide weighted attention multi-scale network (W²AMSN) for accurate MR image SR. On the one hand, features of varying sizes are extracted by wide multi-scale branches. On the other hand, we design a non-reduction attention mechanism to recalibrate feature responses adaptively; this attention preserves continuous cross-channel interaction and focuses on more informative regions. Meanwhile, learnable weighting factors fuse the extracted features selectively. The encapsulated wide weighted attention multi-scale block (W²AMSB) is integrated through a recurrent framework and a global attention mechanism. Extensive experiments and diversified ablation studies show the effectiveness of the proposed W²AMSN, which surpasses state-of-the-art methods on most popular MR image SR benchmarks both quantitatively and qualitatively, and offers superior accuracy and adaptability on real MR images.
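The abstract names three ingredients: wide multi-scale branches, a non-reduction attention mechanism with continuous cross-channel interaction, and learnable weighted fusion of the extracted features. The minimal PyTorch sketch below illustrates how such pieces could fit together in one block; the branch kernel sizes, block layout, and parameter choices are illustrative assumptions, not the authors' actual W²AMSB implementation.

```python
# Minimal sketch (assumptions, not the paper's code): two multi-scale branches,
# learnable scalar fusion weights, and channel attention without dimensionality
# reduction (a 1-D conv over pooled channel descriptors preserves cross-channel
# interaction instead of a bottlenecked fully connected layer).
import torch
import torch.nn as nn


class NonReductionChannelAttention(nn.Module):
    """Recalibrates channels without a reduction bottleneck."""

    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # B x C x 1 x 1 descriptors
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                                      # B x C x 1 x 1
        y = y.squeeze(-1).transpose(1, 2)                     # B x 1 x C
        y = self.conv(y)                                      # cross-channel interaction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)     # B x C x 1 x 1
        return x * y                                          # recalibrated features


class WideWeightedMultiScaleBlock(nn.Module):
    """Parallel branches with different receptive fields, fused by learnable
    scalar weights and followed by non-reduction attention; a residual
    connection keeps the block easy to stack recurrently."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.act = nn.ReLU(inplace=True)
        self.w = nn.Parameter(torch.ones(2))                  # learnable fusion factors
        self.attn = NonReductionChannelAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f3 = self.act(self.branch3(x))                        # small receptive field
        f5 = self.act(self.branch5(x))                        # larger receptive field
        fused = self.w[0] * f3 + self.w[1] * f5               # selective weighted fusion
        return x + self.attn(fused)                           # residual + attention


if __name__ == "__main__":
    block = WideWeightedMultiScaleBlock(channels=64)
    lr_feat = torch.randn(1, 64, 32, 32)
    print(block(lr_feat).shape)                               # torch.Size([1, 64, 32, 32])
```

A stack of such blocks inside a recurrent framework, followed by a global attention step and an upsampling tail, would mirror the overall structure described in the abstract.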
Pages: 962-975
Number of pages: 14