Accurate single image super-resolution using multi-path wide-activated residual network

Cited by: 21
Authors
Chang, Kan [1 ]
Li, Minghong [1 ]
Ding, Pak Lun Kevin [2 ]
Li, Baoxin [2 ]
Affiliations
[1] Guangxi Univ, Sch Comp & Elect Informat, Nanning 530004, Peoples R China
[2] Arizona State Univ, Dept Comp Sci & Engn, Tempe, AZ 85287 USA
Funding
National Natural Science Foundation of China
Keywords
Super-resolution; Convolutional neural network; Residual learning; Multi-Scale learning; Channel attention; FEATURE FUSION NETWORK; REPRESENTATION;
DOI
10.1016/j.sigpro.2020.107567
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline classification codes
0808; 0809
Abstract
In many recent image super-resolution (SR) methods based on convolutional neural networks (CNNs), superior performance is achieved by training very large networks, which may not be suitable for real-world applications with limited computing resources. Therefore, it is necessary to develop more compact networks that achieve a better trade-off between model size and performance. In this paper, we propose an efficient and effective network called the multi-path wide-activated residual network (MWRN). Firstly, as the basic building block of MWRN, the multi-path wide-activated residual block (MWRB) is presented to extract multi-scale features. MWRB consists of three parallel wide-activated residual paths, where dilated convolutions with different dilation factors are used to enlarge the receptive fields. Secondly, the fusional channel attention (FCA) module, which contains a bottleneck layer and a multi-path wide-activated residual channel attention (MWRCA) block, is designed to effectively exploit the multi-level features in MWRN. In each FCA, the MWRCA block refines the fused features by taking the interdependencies among feature channels into consideration. The experiments demonstrate that, compared with state-of-the-art methods, the proposed MWRN model provides very competitive performance with a relatively small number of parameters. (C) 2020 Elsevier B.V. All rights reserved.
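The abstract describes two building blocks: the MWRB, which runs three parallel wide-activated residual paths with dilated convolutions of different dilation factors, and the FCA module, which fuses multi-level features through a bottleneck layer followed by an MWRCA block. The PyTorch sketch below illustrates one plausible reading of these descriptions only; the channel counts, wide-activation expansion ratio, dilation factors (1, 2, 3), squeeze-and-excitation-style attention, and fusion details are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the blocks described in the abstract, under assumed
# hyper-parameters (expansion ratio, dilations, reduction factor).
import torch
import torch.nn as nn


class WideActivatedPath(nn.Module):
    """One wide-activated residual path: expand channels before the ReLU,
    then project back with a dilated 3x3 convolution."""
    def __init__(self, channels, dilation, expansion=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels * expansion, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels * expansion, channels, 3,
                      padding=dilation, dilation=dilation),
        )

    def forward(self, x):
        return self.body(x)


class MWRB(nn.Module):
    """Multi-path wide-activated residual block: three parallel paths with
    different dilation factors, fused by a 1x1 conv and added to the input."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.paths = nn.ModuleList(
            WideActivatedPath(channels, d) for d in dilations)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        multi_scale = torch.cat([p(x) for p in self.paths], dim=1)
        return x + self.fuse(multi_scale)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style re-weighting of feature channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.weight = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.weight(x)


class FCA(nn.Module):
    """Fusional channel attention: a 1x1 bottleneck compresses concatenated
    multi-level features; an MWRB followed by channel attention stands in
    for the MWRCA block that refines the fused result."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.bottleneck = nn.Conv2d(in_channels, out_channels, 1)
        self.mwrb = MWRB(out_channels)
        self.attention = ChannelAttention(out_channels)

    def forward(self, multi_level_features):
        fused = self.bottleneck(torch.cat(multi_level_features, dim=1))
        return self.attention(self.mwrb(fused))


if __name__ == "__main__":
    # Toy usage: fuse two 64-channel feature maps from different depths.
    feats = [torch.randn(1, 64, 48, 48) for _ in range(2)]
    fca = FCA(in_channels=128, out_channels=64)
    print(fca(feats).shape)  # torch.Size([1, 64, 48, 48])
```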
Pages: 13