Feature distillation network for efficient super-resolution with vast receptive field

Citations: 0
Authors
Zhang, Yanfeng [1 ]
Tan, Wenan [1 ]
Mao, Wenyi [1 ]
Affiliations
[1] Shanghai Polytech Univ, Sch Comp & Informat Engn, Jinhai Rd, Shanghai 200000, Peoples R China
Keywords
Convolutional neural network; Single image super-resolution; Large kernel attention mechanism; IMAGE SUPERRESOLUTION;
DOI
10.1007/s11760-024-03750-9
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject classification codes
0808; 0809
Abstract
In recent years, convolutional neural networks have advanced rapidly, leading to numerous lightweight image super-resolution techniques tailored for deployment on edge devices. This paper examines the information distillation mechanism and the vast-receptive-field attention mechanism used in lightweight super-resolution, and introduces a new network structure, the vast-receptive-field feature distillation network (VFDN), which improves inference speed and reduces GPU memory consumption. The receptive field of the attention block is expanded, and large dense convolution kernels are replaced with depth-wise separable convolutions. We also modify the reconstruction block to obtain better reconstruction quality and introduce a Fourier transform-based loss function that emphasizes the frequency-domain information of the input image. Experiments show that the designed VFDN achieves results comparable to RFDN with only 307K parameters (55.81% of RFDN), which is advantageous for deployment on edge devices.
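The two technical ingredients named in the abstract can be illustrated concretely. Below is a minimal PyTorch sketch, not the authors' code: the class names (VastReceptiveFieldAttention, FourierLoss) and the 5x5 / 7x7-dilated-3 kernel choices are illustrative assumptions, borrowed from the common large-kernel-attention decomposition that replaces one dense large kernel with a depth-wise convolution, a depth-wise dilated convolution, and a point-wise 1x1 convolution; the loss compares reconstructions to ground truth in the frequency domain.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VastReceptiveFieldAttention(nn.Module):
        # Hypothetical sketch: a dense large kernel (roughly 21x21) is
        # approximated by a 5x5 depth-wise conv, a 7x7 depth-wise conv
        # with dilation 3, and a 1x1 point-wise conv for channel mixing.
        def __init__(self, channels: int):
            super().__init__()
            self.dw = nn.Conv2d(channels, channels, 5, padding=2,
                                groups=channels)
            self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                        dilation=3, groups=channels)
            self.pw = nn.Conv2d(channels, channels, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            attn = self.pw(self.dw_dilated(self.dw(x)))
            return x * attn  # attention map modulates the input features

    class FourierLoss(nn.Module):
        # Hypothetical sketch of a Fourier transform-based loss: L1
        # distance between the 2-D FFTs of the super-resolved output
        # and the ground-truth image, emphasizing frequency content
        # that pixel-space losses tend to under-weight.
        def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
            sr_f = torch.view_as_real(torch.fft.rfft2(sr, norm="ortho"))
            hr_f = torch.view_as_real(torch.fft.rfft2(hr, norm="ortho"))
            return F.l1_loss(sr_f, hr_f)

    # Quick shape check with illustrative sizes.
    x = torch.randn(1, 48, 64, 64)
    print(VastReceptiveFieldAttention(48)(x).shape)  # [1, 48, 64, 64]
    print(FourierLoss()(x, torch.randn_like(x)))     # scalar loss tensor

Because both large convolutions are depth-wise, the parameter count grows linearly rather than quadratically with kernel size, which is consistent with the abstract's claim of a vast receptive field at a small parameter budget.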
Pages: 9
Related papers (50 total)
• [21] Li, Yusong; Xu, Longwei; Yang, Weibin; Geng, Dehua; Xu, Mingyuan; Dong, Zhiqi; Wang, Pengwei. 1D kernel distillation network for efficient image super-resolution. IMAGE AND VISION COMPUTING, 2025, 154.
• [22] Hu, Yanting; Huang, Yuanfei; Zhang, Kaibing. Multi-scale information distillation network for efficient image super-resolution. KNOWLEDGE-BASED SYSTEMS, 2023, 275.
• [23] Wu, Chen; Wang, Ling; Su, Xin; Zheng, Zhuoran. Adaptive Feature Selection Modulation Network for Efficient Image Super-Resolution. IEEE SIGNAL PROCESSING LETTERS, 2025, 32: 1231-1235.
• [24] Song, Jianwen; Sowmya, Arcot; Sun, Changming. Efficient Hybrid Feature Interaction Network for Stereo Image Super-Resolution. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 10094-10105.
• [25] Moon, Hyeon-Cheol; Kim, Jae-Gon; Jeong, Jinwoo; Kim, Sungjei. Feature-Domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution. IEEE ACCESS, 2023, 11: 131885-131896.
• [26] He, Zibin; Dai, Tao; Lu, Jian; Jiang, Yong; Xia, Shu-Tao. FAKD: Feature-Affinity Based Knowledge Distillation for Efficient Image Super-Resolution. 2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020: 518-522.
• [27] Wang, Tuoran; Cheng, Na; Ding, Shijia; Wang, Hongyu. Efficient Attention Fusion Feature Extraction Network for Image Super-Resolution. ACM International Conference Proceeding Series, 2023: 35-44.
• [28] Gao, Xiang; Zhou, Ying; Wu, Sining; Wu, Xinrong; Wang, Fan; Hu, Xiaopeng. Residual multi-branch distillation network for efficient image super-resolution. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (30): 75217-75241.
• [29] Qu, Daokuan; Ke, Yuyao. Asymmetric Large Kernel Distillation Network for efficient single image super-resolution. FRONTIERS IN NEUROSCIENCE, 2024, 18.
• [30] Gendy, Garas; Sabor, Nabil; Hou, Jingchao; He, Guanghui. Balanced Spatial Feature Distillation and Pyramid Attention Network for Lightweight Image Super-resolution. NEUROCOMPUTING, 2022, 509: 157-166.