Single MR image super-resolution via channel splitting and serial fusion network

Cited by: 14
Authors
Zhao, Xiaole [1 ]
Zhang, Yulun [2 ]
Qin, Yun [3 ]
Wang, Qian [4 ]
Zhang, Tao [3 ]
Li, Tianrui [1 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 611756, Sichuan, Peoples R China
[2] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[3] Univ Elect Sci & Technol China, Sch Life Sci & Technol, Chengdu 611731, Sichuan, Peoples R China
[4] Tangshan Seism Stn Hebei Earthquake Agcy, Tangshan 066300, Hebei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convolutional neural network; Magnetic resonance imaging; Channel splitting; Super-resolution; Serial fusion;
DOI
10.1016/j.knosys.2022.108669
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In magnetic resonance imaging (MRI), spatial resolution is a critical imaging parameter that determines how much information is contained in a unit of space. Acquiring high-resolution MRI data usually requires a long scanning time and is subject to motion artifacts due to hardware, physical, and physiological limitations. Single image super-resolution (SISR) based on deep learning is an effective and promising alternative technique for improving the native spatial resolution of magnetic resonance (MR) images. However, because of the limited diversity and narrow distribution of medical training samples, effectively training deep models and balancing model performance against computational overhead remain major challenges. In addition, deeper networks are more difficult to train effectively, since feature information is progressively attenuated as the network deepens. In this paper, a novel channel splitting and serial fusion network (CSSFN) is presented for single MR image super-resolution. The proposed CSSFN splits hierarchical features into a series of subfeatures, which are then integrated in a serial manner. Hence, the network becomes deeper and can treat the subfeatures discriminatively and reasonably. Moreover, a dense global feature fusion (DGFF) is adopted to integrate the intermediate features, which further promotes information flow in the network and helps to stabilize model training. Extensive experiments on several typical MR images show the superiority of our CSSFN models over other advanced SISR methods. (C) 2022 Elsevier B.V. All rights reserved.
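The channel splitting and serial fusion idea described above can be illustrated with a rough NumPy sketch. This is not the paper's actual architecture: the function names, the 1x1 channel-mixing transform standing in for the paper's branch blocks, and the final concatenation loosely mirroring dense global feature fusion are all illustrative assumptions.

```python
import numpy as np

def mix_channels(x, w):
    # Illustrative per-subfeature transform: a 1x1 "convolution"
    # (pure channel mixing), a stand-in for a learned branch block.
    # w: (out_channels, in_channels); x: (in_channels, H, W)
    return np.tensordot(w, x, axes=([1], [0]))

def channel_split_serial_fusion(feat, n_splits, rng):
    """Hypothetical sketch: split the feature map's channels into
    n_splits groups, then fuse them serially -- each group is processed
    together with the fused output of the previous group. All fused
    sub-outputs are concatenated at the end, loosely echoing the
    dense global feature fusion (DGFF) described in the abstract."""
    subs = np.split(feat, n_splits, axis=0)   # channel splitting
    fused = subs[0]
    outputs = [fused]
    for s in subs[1:]:
        x = np.concatenate([fused, s], axis=0)           # serial fusion input
        w = rng.standard_normal((s.shape[0], x.shape[0])) * 0.1
        fused = mix_channels(x, w)                       # back to group width
        outputs.append(fused)
    return np.concatenate(outputs, axis=0)               # global fusion

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))  # (channels, H, W)
out = channel_split_serial_fusion(feat, n_splits=4, rng=rng)
print(out.shape)  # (8, 4, 4): four serially fused 2-channel groups
```

The serial dependency is the key point: later subfeatures see the already-fused earlier ones, effectively deepening the network without widening every layer.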
Pages: 15
References (49 in total)
[1]   Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network [J].
Ahn, Namhyuk ;
Kang, Byungkon ;
Sohn, Kyung-Ah .
COMPUTER VISION - ECCV 2018, PT X, 2018, 11214 :256-272
[2]  
Chen YH, 2018, I S BIOMED IMAGING, P739
[3]   Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search [J].
Chu, Xiangxiang ;
Zhang, Bo ;
Ma, Hailong ;
Xu, Ruijun ;
Li, Qingyuan .
2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, :59-64
[4]   Second-order Attention Network for Single Image Super-Resolution [J].
Dai, Tao ;
Cai, Jianrui ;
Zhang, Yongbing ;
Xia, Shu-Tao ;
Zhang, Lei .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :11057-11066
[5]   Image Super-Resolution Using Deep Convolutional Networks [J].
Dong, Chao ;
Loy, Chen Change ;
He, Kaiming ;
Tang, Xiaoou .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (02) :295-307
[6]   Example-based super-resolution [J].
Freeman, WT ;
Jones, TR ;
Pasztor, EC .
IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2002, 22 (02) :56-65
[7]  
Glorot X, 2010, P 13 INT C ARTIFICIA, P249
[8]   Deep Residual Learning for Image Recognition [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :770-778
[9]  
Hu J., 2016, 8 INT C DIG IM PROC, P10033
[10]   Reinforcement Learning to Rank in E-Commerce Search Engine: Formalization, Analysis, and Application [J].
Hu, Yujing ;
Da, Qing ;
Zeng, Anxiang ;
Yu, Yang ;
Xu, Yinghui .
KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, :368-377