Image super-resolution reconstruction network with dual attention and structural similarity measure

Cited: 6
Authors
You-wen, Huang [1 ]
Xin, Tang [1 ]
Bin, Zhou [1 ]
Affiliations
[1] Jiangxi Univ Sci & Technol, Sch Informat Engn, Ganzhou 341000, Peoples R China
Keywords
super-resolution; U-net network; data augmentation; dual attention; structural similarity;
DOI
10.37188/CJLCD.2021-0178
Chinese Library Classification
O7 [Crystallography];
Discipline Codes
0702 ; 070205 ; 0703 ; 080501 ;
Abstract
Aiming at the problem that the solution space of the mapping function from a low-resolution image to a high-resolution image is extremely large, which makes it difficult for super-resolution reconstruction models to generate detailed textures, this paper proposes an image super-resolution method that combines dual attention and a structural similarity measure. With an improved U-Net model as the basic structure, data augmentation methods for low-level vision tasks are introduced to increase sample diversity. The encoder is composed of convolution layers and an adaptive-parameter rectified linear unit (Dynamic ReLU). A residual dual attention module (RDAM) is also introduced, which together with a Pixel Shuffle module forms the decoder, and the image is enlarged progressively through up-sampling. To make the generated images better match human visual characteristics, a loss function incorporating the structural similarity measure is proposed to strengthen the network constraints. Experimental results on the Set5, Set14, BSD100 and Urban100 standard test sets show that, compared with SRCNN, the average PSNR of the reconstructed images improves by about 1.64 dB and the SSIM by about 0.047. The proposed method produces reconstructed images with more detailed textures and effectively reduces the possible solution space of the mapping function.
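The loss described above combines a conventional pixel-wise term with a structural similarity (SSIM) term. The paper's exact formulation is not given in this record, so the sketch below is a minimal illustration assuming a global (single-window) SSIM and an L1 pixel loss blended by a weight `alpha`; both choices are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (whole-image) SSIM; the paper likely uses a windowed variant."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

def combined_loss(sr, hr, alpha=0.84):
    """Blend an SSIM term with L1 pixel loss.

    The L1 pairing and the weight alpha are illustrative assumptions,
    not values taken from the paper.
    """
    l1 = np.abs(sr - hr).mean()
    return alpha * (1.0 - global_ssim(sr, hr)) + (1.0 - alpha) * l1
```

With identical reconstructed and ground-truth images the SSIM term is exactly 1 and the loss vanishes, so the SSIM component penalizes only structural deviation, which is what lets it act as an additional constraint alongside the pixel loss.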
Pages: 367-375
Page count: 10