Feature distillation network for efficient super-resolution with vast receptive field

Times Cited: 0
Authors
Zhang, Yanfeng [1 ]
Tan, Wenan [1 ]
Mao, Wenyi [1 ]
Affiliations
[1] Shanghai Polytech Univ, Sch Comp & Informat Engn, Jinhai Rd, Shanghai 200000, Peoples R China
Keywords
Convolutional neural network; Single image super-resolution; Large kernel attention mechanism; IMAGE SUPERRESOLUTION;
DOI
10.1007/s11760-024-03750-9
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
In recent years, convolutional neural networks have advanced rapidly, leading to numerous lightweight image super-resolution techniques tailored for deployment on edge devices. This paper examines the information distillation mechanism and the vast-receptive-field attention mechanism used in lightweight super-resolution, and introduces a new network structure, the vast-receptive-field feature distillation network (VFDN), which improves inference speed and reduces GPU memory consumption. The receptive field of the attention block is expanded, and large dense convolution kernels are replaced with depth-wise separable convolutions. In addition, the reconstruction block is modified to obtain better reconstruction quality, and a Fourier transform-based loss function is introduced that emphasizes the frequency-domain information of the input image. Experiments show that VFDN achieves results comparable to RFDN with only 307K parameters (55.81% of RFDN), which is advantageous for deployment on edge devices.
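The abstract describes two reusable ideas: a vast-receptive-field attention block in which a large dense kernel is replaced by depth-wise (and dilated depth-wise) plus point-wise convolutions, and a Fourier transform-based loss on the frequency spectrum. The following is a minimal PyTorch sketch of both under stated assumptions; the class names, the 5x5 / 7x7-dilation-3 kernel choice, and the L1 spectral loss are illustrative assumptions, not the paper's released VFDN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LargeKernelAttention(nn.Module):
    """Sketch of a vast-receptive-field attention block (assumed design).

    A dense large kernel is approximated by a 5x5 depth-wise conv, a 7x7
    depth-wise dilated conv (dilation 3), and a 1x1 point-wise conv; the
    result modulates the input as an element-wise attention map.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Local context per channel.
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # Dilated depth-wise conv enlarges the receptive field (~23x23 overall)
        # without a dense large kernel.
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        # Point-wise conv mixes information across channels.
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # element-wise attention


class FFTLoss(nn.Module):
    """Assumed frequency-domain loss: L1 distance between the 2-D Fourier
    spectra of the super-resolved and ground-truth images."""

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        sr_freq = torch.fft.rfft2(sr, norm="ortho")
        hr_freq = torch.fft.rfft2(hr, norm="ortho")
        # Compare real and imaginary parts with L1.
        return F.l1_loss(torch.view_as_real(sr_freq),
                         torch.view_as_real(hr_freq))


if __name__ == "__main__":
    x = torch.randn(1, 48, 64, 64)
    print(LargeKernelAttention(48)(x).shape)  # torch.Size([1, 48, 64, 64])
    print(FFTLoss()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)))
```

In practice such an FFT loss is typically added to a pixel-wise L1 term with a small weight; the weighting used by VFDN is not given in this record.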
Pages: 9
Related Papers
50 records in total
[21]   FAKD: Feature-affinity based knowledge distillation for efficient image super-resolution [J].
He, Zibin ;
Dai, Tao ;
Lu, Jian ;
Jiang, Yong ;
Xia, Shu-Tao .
2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, :518-522
[22]   Adaptive Feature Selection Modulation Network for Efficient Image Super-Resolution [J].
Wu, Chen ;
Wang, Ling ;
Su, Xin ;
Zheng, Zhuoran .
IEEE SIGNAL PROCESSING LETTERS, 2025, 32 :1231-1235
[23]   Residual multi-branch distillation network for efficient image super-resolution [J].
Gao, Xiang ;
Zhou, Ying ;
Wu, Sining ;
Wu, Xinrong ;
Wang, Fan ;
Hu, Xiaopeng .
MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (30) :75217-75241
[24]   Asymmetric Large Kernel Distillation Network for efficient single image super-resolution [J].
Qu, Daokuan ;
Ke, Yuyao .
FRONTIERS IN NEUROSCIENCE, 2024, 18
[25]   Balanced Spatial Feature Distillation and Pyramid Attention Network for Lightweight Image Super-resolution [J].
Gendy, Garas ;
Sabor, Nabil ;
Hou, Jingchao ;
He, Guanghui .
NEUROCOMPUTING, 2022, 509 :157-166
[26]   Lightweight image super-resolution with group-convolutional feature enhanced distillation network [J].
Zhang, Wei ;
Fan, Zhongqiang ;
Song, Yan ;
Wang, Yagang .
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 :2467-2482
[27]   Lightweight image super-resolution with group-convolutional feature enhanced distillation network [J].
Zhang, Wei ;
Fan, Zhongqiang ;
Song, Yan ;
Wang, Yagang .
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (07) :2467-2482
[28]   Perception-oriented Single Image Super-Resolution Network with Receptive Field Block [J].
Zhang, Wei ;
Hou, Yaqing ;
Fan, Wanshu ;
Yang, Xin ;
Zhou, Dongsheng ;
Zhang, Qiang ;
Wei, Xiaopeng .
NEURAL COMPUTING & APPLICATIONS, 2022, 34 :14845-14858
[29]   Perception-oriented Single Image Super-Resolution Network with Receptive Field Block [J].
Zhang, Wei ;
Hou, Yaqing ;
Fan, Wanshu ;
Yang, Xin ;
Zhou, Dongsheng ;
Zhang, Qiang ;
Wei, Xiaopeng .
NEURAL COMPUTING & APPLICATIONS, 2022, 34 (17) :14845-14858
[30]   Multi-scale receptive field fusion network for lightweight image super-resolution [J].
Luo, Jing ;
Zhao, Lin ;
Zhu, Li ;
Tao, Wenbing .
NEUROCOMPUTING, 2022, 493 :314-326