Image super-resolution based on residually dense distilled attention network

Cited: 8
Authors
Dun, Yujie [1 ]
Da, Zongyang [1 ]
Yang, Shuai [1 ]
Qian, Xueming [1 ,2 ,3 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Informat & Commun Engn, Xian 710049, Peoples R China
[2] Xi An Jiao Tong Univ, Key Lab Intelligent Networks & Network Secur, Minist Educ, Xian 710049, Peoples R China
[3] Xi An Jiao Tong Univ, Sch Elect & Informat Engn, SMILES Lab, Xian 710049, Peoples R China
Keywords
Single image super-resolution; Convolutional neural network; Deep learning; Feature distillation; Attention mechanism;
DOI
10.1016/j.neucom.2021.02.008
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep convolutional neural networks (CNNs) play an increasingly important role in image super-resolution (SR). However, simply deepening or widening a network can lead to an excessive number of parameters and greater training difficulty. In this paper, we propose a residually dense distilled attention network (RDDAN) to address these problems in SR. Residual networks can make full use of the information from previous layers. In RDDAN we propose a connection block group (CBG), which is stacked in the feature extraction module of the network. A CBG consists of two parts: a dense enhancement network (DEN) and a channel attention producing (CAP) module. First, instead of simply stacking residual blocks, the DEN uses feature distillation with both dense concatenation and skip connections to extract deep and shallow features, which enhances representation ability. Second, using an attention mechanism, the CAP exploits channel-wise associations to adjust channel-wise features and restore high-frequency feature information. Evaluations against benchmark methods show that our method achieves more desirable performance than state-of-the-art methods. (c) 2021 Elsevier B.V. All rights reserved.
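The record includes no code, but the channel-attention idea the abstract attributes to the CAP module (rescaling each feature channel by a learned gate derived from global channel statistics, in the common squeeze-and-excitation formulation) can be illustrated with a minimal NumPy sketch. The function name `channel_attention`, the weight matrices `w1`/`w2`, and the bottleneck reduction are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Illustrative channel-attention gate (squeeze-and-excitation style).

    x  : feature map of shape (C, H, W)
    w1 : bottleneck weights of shape (C // r, C)  -- hypothetical
    w2 : expansion weights of shape (C, C // r)   -- hypothetical
    """
    # Squeeze: global average pooling over spatial dims -> one statistic per channel
    squeeze = x.mean(axis=(1, 2))                 # shape (C,)
    # Excite: bottleneck FC + ReLU, then FC back to C channels + sigmoid
    hidden = np.maximum(w1 @ squeeze, 0.0)        # shape (C // r,)
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # shape (C,), each gate in (0, 1)
    # Rescale each channel of the input feature map by its gate
    return x * scale[:, None, None]
```

In a full network such a gate would sit after a block's convolutions, letting channels that carry high-frequency detail be amplified relative to the rest; the sketch above only shows the gating arithmetic, not the surrounding convolutional layers.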
Pages: 47-57
Number of pages: 11
Related Papers
51 references in total
[1] Chen, Yunjin; Pock, Thomas. Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1256-1272.
[2] Cheng X. International Conference on Pattern Recognition (ICPR), 2018: 147. DOI: 10.1109/ICPR.2018.8546130.
[3] Dong C. Lecture Notes in Computer Science, 2016. DOI: 10.1007/978-3-319-46475-6_25.
[4] Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou. Image Super-Resolution Using Deep Convolutional Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307.
[5] Dun, Yujie; Da, Zongyang; Yang, Shuai; Xue, Yao; Qian, Xueming. Kernel-attended residual network for single image super-resolution. Knowledge-Based Systems, 2021, 213.
[6] Freeman, W. T.; Pasztor, E. C.; Carmichael, O. T. Learning low-level vision. International Journal of Computer Vision, 2000, 40(1): 25-47.
[7] Fu, Jianlong; Zheng, Heliang; Mei, Tao. Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-grained Image Recognition. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 4476-4484.
[8] Girshick, Ross; Donahue, Jeff; Darrell, Trevor; Malik, Jitendra. Rich feature hierarchies for accurate object detection and semantic segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014: 580-587.
[9] Glorot X. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010: 249.
[10] Haris, Muhammad; Shakhnarovich, Greg; Ukita, Norimichi. Deep Back-Projection Networks for Super-Resolution. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 1664-1673.