Image super-resolution reconstruction based on dynamic attention network

Cited by: 0
Authors
Zhao X.-Q. [1 ,2 ,3 ]
Wang Z. [1 ]
Song Z.-Y. [1 ]
Jiang H.-M. [1 ,2 ,3 ]
Affiliations
[1] School of Electrical Engineering and Information Engineering, Lanzhou University of Technology, Lanzhou
[2] Key Laboratory of Gansu Advanced Control for Industrial Processes, Lanzhou
[3] National Experimental Teaching Center of Electrical and Control Engineering, Lanzhou University of Technology, Lanzhou
Source
Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science) | 2023, Vol. 57, No. 8
Keywords
attention mechanism; double butterfly structure; dynamic convolution; image processing; image super-resolution
DOI
10.3785/j.issn.1008-973X.2023.08.002
Abstract
Existing image super-resolution algorithms process channels and spatial locations of differing importance in the same way, so computing resources cannot be concentrated on the most important features. To address this problem, an image super-resolution algorithm based on a dynamic attention network was proposed. First, instead of treating all attention mechanisms equally, a dynamic attention module was constructed to assign dynamically learned weights to the different attention mechanisms, so that the high-frequency information most needed by the network was captured and high-quality images were reconstructed. Second, a double butterfly structure was built through feature reuse, which fully fused the information from the two attention branches and compensated for the feature information missing between the different attention mechanisms. Finally, the model was evaluated on the Set5, Set14, BSD100, Urban100 and Manga109 datasets. Results show that the proposed algorithm has better overall performance than other mainstream super-resolution algorithms. At a scale factor of 4, the peak signal-to-noise ratio improved over the second-best algorithm by 0.06, 0.07, 0.04, 0.15 and 0.15 dB, respectively, on the five public test sets. © 2023 Zhejiang University. All rights reserved.
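The abstract does not give the exact design of the dynamic attention module, but its core idea of replacing equal treatment of attention mechanisms with dynamically learned branch weights can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the branch structure, the simple pooled-feature attentions, and the fixed `logits` (which in the paper would come from a small learned gating sub-network) are all assumptions.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); squeeze spatial dims, gate each channel
    w = _sigmoid(feat.mean(axis=(1, 2)))          # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # gate each spatial position by its cross-channel mean
    w = _sigmoid(feat.mean(axis=0))               # (H, W)
    return feat * w[None, :, :]

def dynamic_attention(feat, logits):
    # Softmax over branch logits gives dynamic, input-dependent
    # weights instead of a fixed equal split between the two
    # attention mechanisms (the key idea described in the abstract).
    e = np.exp(logits - logits.max())
    alpha = e / e.sum()
    branches = [channel_attention(feat), spatial_attention(feat)]
    return sum(a * b for a, b in zip(alpha, branches))
```

With equal logits the module degenerates to averaging the two branches, i.e. the "equalized" behavior the paper moves away from; a learned gate would shift `alpha` toward whichever branch carries more of the high-frequency information needed for reconstruction.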
Pages: 1487-1494
Number of pages: 7
References (34 in total)
[1] SI W, HAN J, YANG Z, et al. Research on key techniques for super-resolution reconstruction of satellite remote sensing images of transmission lines [C]. Journal of Physics: Conference Series, 2021.
[2] DEEBA F, KUN S, DHAREJO F A, et al. Sparse representation based computed tomography images reconstruction by coupled dictionary learning algorithm [J]. IET Image Processing, 2020, 14(11): 2365-2375.
[3] ZHANG F, LIU N, CHANG L, et al. Edge-guided single facial depth map super-resolution using CNN [J]. IET Image Processing, 2020, 14(17): 4708-4716.
[4] LI W, LIAO W. Stable super-resolution limit and smallest singular value of restricted Fourier matrices [J]. Applied and Computational Harmonic Analysis, 2021, 51: 118-156.
[5] WU Shi-hao, LUO Xiao-hua, ZHANG Jian-wei, et al. FPGA-based hardware implementation of new edge-directed interpolation algorithm [J]. Journal of Zhejiang University: Engineering Science, 2018, 52(11): 2226-2232.
[6] DUAN Ran, ZHOU Deng-wen, ZHAO Li-juan, et al. Image super-resolution reconstruction based on multi-scale feature mapping network [J]. Journal of Zhejiang University: Engineering Science, 2019, 53(7): 1331-1339.
[7] DONG C, LOY C C, HE K, et al. Learning a deep convolutional network for image super-resolution [C]. European Conference on Computer Vision, 2014: 184-199.
[8] DONG C, LOY C C, HE K, et al. Image super-resolution using deep convolutional networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(2): 295-307.
[9] LIM B, SON S, KIM H, et al. Enhanced deep residual networks for single image super-resolution [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017: 136-144.
[10] TAI Y, YANG J, LIU X, et al. MemNet: a persistent memory network for image restoration [C]. Proceedings of the IEEE International Conference on Computer Vision, 2017: 4539-4547.