Lightweight multi-scale distillation attention network for image super-resolution

Cited: 0
Authors
Tang, Yinggan [1 ,2 ,3 ]
Hu, Quanwei [1 ]
Bu, Chunning [4 ]
Affiliations
[1] Yanshan Univ, Sch Elect Engn, Qinhuangdao 066004, Hebei, Peoples R China
[2] Yanshan Univ, Key Lab Intelligent Rehabil & Neromodulat Hebei Pr, Qinhuangdao 066004, Hebei, Peoples R China
[3] Yanshan Univ, Key Lab Intelligent Control & Neural Informat Proc, Minist Educ, Qinhuangdao 066000, Hebei, Peoples R China
[4] Cangzhou Jiaotong Coll, Sch Elect & Elect Engn, Cangzhou 061110, Hebei, Peoples R China
Keywords
Super-resolution; Lightweight network; Convolutional neural network; Information distillation; Multi-scale;
DOI
10.1016/j.knosys.2024.112807
CLC classification
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) with deep structure have achieved remarkable image super-resolution (SR) performance. However, the dramatically increased model parameters and computations make them difficult to deploy on low-computing-power devices. To address this issue, a lightweight multi-scale distillation attention network (MSDAN) is proposed for image SR in this paper. Specifically, we design an effective branch fusion block (EBFB) by utilizing pixel attention with different kernel sizes via distillation connections, which can extract features from different receptive fields and obtain attention coefficients for all pixels in the feature maps. Additionally, we propose an enhanced multi-scale spatial attention (EMSSA) that uses AdaptiveMaxPool and convolution kernels of different sizes to construct multiple downsampling branches, giving it adaptive spatial information extraction ability while maintaining a large receptive field. Extensive experiments demonstrate the superiority of the proposed model over most state-of-the-art (SOTA) lightweight SR models. Most importantly, compared to the residual feature distillation network (RFDN), the proposed model achieves a 0.11 dB PSNR improvement on the Set14 dataset with 57.5% fewer parameters and 20.3% less computational cost at the x4 upsampling factor. The code of this paper is available at https://github.com/Supereeeee/MSDAN.
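The abstract gives enough detail to sketch the flavor of the EMSSA idea: several parallel downsampling branches (adaptive max pooling at different output scales, followed by convolutions with different kernel sizes) whose outputs are fused into a per-pixel spatial attention map. The sketch below is a hypothetical reconstruction for illustration only, not the authors' implementation (which is at the linked repository); the module name, pooling scales, and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleSpatialAttention(nn.Module):
    """Hypothetical EMSSA-style block: parallel downsampling branches
    (AdaptiveMaxPool2d at several output scales, then convs with
    different kernel sizes) fused into a spatial attention map."""

    def __init__(self, channels: int, scales=(8, 16), kernel_sizes=(3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveMaxPool2d(s),  # adaptive spatial information extraction
                nn.Conv2d(channels, channels, k, padding=k // 2),
            )
            for s, k in zip(scales, kernel_sizes)
        )
        # 1x1 conv fuses the concatenated branch features into one map.
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        # Upsample each branch back to the input resolution before fusing;
        # pooling then convolving enlarges the effective receptive field.
        feats = [
            nn.functional.interpolate(b(x), size=(h, w), mode="nearest")
            for b in self.branches
        ]
        attn = torch.sigmoid(self.fuse(torch.cat(feats, dim=1)))
        return x * attn  # attention coefficient for every pixel

x = torch.randn(1, 16, 32, 32)
y = MultiScaleSpatialAttention(16)(x)
print(tuple(y.shape))  # (1, 16, 32, 32): same shape as the input
```

The sigmoid-gated elementwise product is the standard way such attention maps modulate features; the actual MSDAN fusion and distillation connections may differ.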
Pages: 15
Related papers (50 in total)
  • [32] Attention-enhanced multi-scale residual network for single image super-resolution
    Sun, Yubin
    Qin, Jiongming
    Gao, Xuliang
    Chai, Shuiqin
    Chen, Bin
    SIGNAL IMAGE AND VIDEO PROCESSING, 2022, 16 (05) : 1417 - 1424
  • [33] Scale-Aware Distillation Network for Lightweight Image Super-Resolution
    Lu, Haowei
    Lu, Yao
    Li, Gongping
    Sun, Yanbei
    Wang, Shunzhou
    Li, Yugang
    PATTERN RECOGNITION AND COMPUTER VISION, PT III, 2021, 13021 : 128 - 139
  • [35] Lightweight Image Super-Resolution with Information Multi-distillation Network
    Hui, Zheng
    Gao, Xinbo
    Yang, Yunchu
    Wang, Xiumei
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 2024 - 2032
  • [36] Single image super-resolution based on multi-scale dense attention network
    Gao, Farong
    Wang, Yong
    Yang, Zhangyi
    Ma, Yuliang
    Zhang, Qizhong
    SOFT COMPUTING, 2023, 27 (06) : 2981 - 2992
  • [37] Single Image Super-Resolution Using Multi-scale Convolutional Neural Network
    Jia, Xiaoyi
    Xu, Xiangmin
    Cai, Bolun
    Guo, Kailing
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT I, 2018, 10735 : 149 - 157
  • [38] Image super-resolution reconstruction with multi-scale attention fusion
    Chen, Chun-yi
    Wu, Xin-yi
    Hu, Xiao-juan
    Yu, Hai-yang
    CHINESE OPTICS, 2023, 16 (05) : 1034 - 1044
  • [39] TBNet: Stereo Image Super-Resolution with Multi-Scale Attention
    Zhu, Jiyang
    Han, Xue
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2023, 32 (18)
  • [40] A lightweight multi-scale feature integration network for real-time single image super-resolution
    He, Zheng
    Liu, Kai
    Liu, Zitao
    Dou, Qingyu
    Yang, Xiaomin
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2021, 18 (04) : 1221 - 1234