Triple-Attention Mixed-Link Network for Single-Image Super-Resolution

Cited by: 7
Authors
Cheng, Xi [1 ]
Li, Xiang [2 ]
Yang, Jian [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Minist Educ, Key Lab Intelligent Percept & Syst High Dimens In, Nanjing 210094, Jiangsu, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Jiangsu Key Lab Image & Video Understanding Socia, Nanjing 210094, Jiangsu, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2019, Vol. 9, Issue 15
Keywords
super-resolution; mixed-link networks; triple-attention;
DOI
10.3390/app9152992
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Featured Application: Single-image super-resolution (SISR) is an important low-level computer-vision task with high practical value in fields such as industrial inspection, medical imaging, and security monitoring.

Abstract: Single-image super-resolution is an important low-level computer-vision task. Recent approaches based on deep convolutional neural networks have achieved impressive performance, but existing architectures are limited by comparatively simple structures and weak representational power. To significantly enhance the feature representation, we propose the triple-attention mixed-link network (TAN), which combines (1) attention mechanisms over three different aspects (kernel, spatial, and channel) and (2) a fusion of powerful residual and dense connections (the mixed link). Specifically, the multi-kernel network learns multi-hierarchical representations under different receptive fields. The features are recalibrated by effective kernel and channel attention, which filters the information and enables the network to learn more powerful representations. The features finally pass through spatial attention in the reconstruction network, which fuses local and global information, lets the network restore more detail, and improves reconstruction quality. The proposed structure reduces the parameter growth rate by 50% compared with previous approaches, and the three attention mechanisms provide gains of 0.49 dB, 0.58 dB, and 0.32 dB when evaluated on Set5, Set14, and BSD100, respectively. Thanks to the diverse feature recalibrations and the advanced information-flow topology, the proposed model performs competitively against state-of-the-art methods on the benchmark evaluations.
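The channel-attention recalibration described above follows the squeeze-and-excitation pattern (cf. reference [12] below): globally pool each feature channel, pass the pooled vector through a small bottleneck, and rescale every channel by the resulting weight. A minimal NumPy sketch of that pattern follows; the shapes, the reduction ratio, and the random weights are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Recalibrate a (C, H, W) feature map by per-channel weights.

    Squeeze: global average pooling over H and W -> (C,).
    Excite: two small dense layers with a ReLU bottleneck, then sigmoid.
    Scale: multiply each channel by its weight in (0, 1).
    """
    squeezed = features.mean(axis=(1, 2))      # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)    # ReLU bottleneck, (C // r,)
    weights = sigmoid(w2 @ hidden)             # (C,), each in (0, 1)
    return features * weights[:, None, None]

# Toy usage: 8 channels, reduction ratio r = 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w1 = 0.1 * rng.standard_normal((2, 8))   # squeeze 8 -> 2
w2 = 0.1 * rng.standard_normal((8, 2))   # excite 2 -> 8
y = channel_attention(x, w1, w2)
```

In a trained network the bottleneck weights are learned, so informative channels receive weights near 1 and uninformative ones are suppressed toward 0; here the random weights merely demonstrate the per-channel rescaling.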
Pages: 15
Cited References
30 records in total
[11]   Deep Residual Learning for Image Recognition [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :770-778
[12]  
Hu J, 2018, PROC CVPR IEEE, P7132, DOI [10.1109/TPAMI.2019.2913372, 10.1109/CVPR.2018.00745]
[13]   Densely Connected Convolutional Networks [J].
Huang, Gao ;
Liu, Zhuang ;
van der Maaten, Laurens ;
Weinberger, Kilian Q. .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :2261-2269
[14]  
Huang JB, 2015, PROC CVPR IEEE, P5197, DOI 10.1109/CVPR.2015.7299156
[15]   Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, :1026-1034
[16]  
Kim J, 2016, PROC CVPR IEEE, P1637, DOI [10.1109/CVPR.2016.181, 10.1109/CVPR.2016.182]
[17]  
Lin M, 2014, Network In Network, ICLR 2014
[18]  
Martin D, 2001, EIGHTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOL II, PROCEEDINGS, P416, DOI 10.1109/ICCV.2001.937655
[19]   Sketch-based manga retrieval using manga109 dataset [J].
Matsui, Yusuke ;
Ito, Kota ;
Aramaki, Yuji ;
Fujimoto, Azuma ;
Ogawa, Toru ;
Yamasaki, Toshihiko ;
Aizawa, Kiyoharu .
MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (20) :21811-21838
[20]   EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis [J].
Sajjadi, Mehdi S. M. ;
Schoelkopf, Bernhard ;
Hirsch, Michael .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :4501-4510