Asymmetric Large Kernel Distillation Network for efficient single image super-resolution

Cited by: 1
Authors
Qu, Daokuan [1 ,2 ]
Ke, Yuyao [3 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou, Jiangsu, Peoples R China
[2] Shandong Polytech Coll, Sch Energy & Mat Engn, Jining, Shandong, Peoples R China
[3] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou, Jiangsu, Peoples R China
Keywords
single image super-resolution; efficient method; asymmetric large kernel convolution; information distillation; convolutional neural network;
DOI
10.3389/fnins.2024.1502499
CLC number
Q189 [Neuroscience]
Subject classification code
071006
Abstract
Recently, significant advancements have been made in the field of efficient single-image super-resolution, primarily driven by the innovative concept of information distillation. This method adeptly leverages multi-level features to facilitate high-resolution image reconstruction, allowing for enhanced detail and clarity. However, many existing approaches predominantly emphasize the enhancement of distilled features, often overlooking the critical aspect of improving the feature extraction capabilities of the distillation module itself. In this paper, we address this limitation by introducing an asymmetric large-kernel convolution design. By increasing the size of the convolution kernel, we expand the receptive field, which enables the model to more effectively capture long-range dependencies among image pixels. This enhancement significantly improves the model's perceptual ability, leading to more accurate reconstructions. To maintain a manageable level of model complexity, we adopt a lightweight architecture that employs asymmetric convolution techniques. Building on this foundation, we propose the Lightweight Asymmetric Large Kernel Distillation Network (ALKDNet). Comprehensive experiments conducted on five widely recognized benchmark datasets (Set5, Set14, BSD100, Urban100, and Manga109) indicate that ALKDNet not only preserves efficiency but also demonstrates performance enhancements relative to existing super-resolution methods. The average PSNR and SSIM values show improvements of 0.10 dB and 0.0013, respectively, thereby achieving state-of-the-art performance.
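The complexity argument in the abstract can be made concrete with a parameter count. The sketch below is illustrative only: the 64-channel width and 13x13 kernel size are assumed values for the example, not the configuration reported in the paper, and the paper's actual module may differ (e.g. it may use depthwise convolutions). It shows why factoring one large square kernel into a 1xK plus Kx1 pair keeps the model lightweight while the cascaded receptive field still spans KxK.

```python
# Parameter count of a standard 2-D convolution layer (bias omitted):
# in_channels * out_channels * kernel_height * kernel_width
def conv_params(c_in: int, c_out: int, kh: int, kw: int) -> int:
    return c_in * c_out * kh * kw

C, K = 64, 13  # hypothetical channel width and large-kernel size

full_kernel = conv_params(C, C, K, K)       # one K x K convolution
asymmetric = (conv_params(C, C, 1, K)       # 1 x K convolution ...
              + conv_params(C, C, K, 1))    # ... followed by K x 1

print(full_kernel, asymmetric, full_kernel / asymmetric)
# With these numbers the asymmetric pair uses K/2 = 6.5x fewer
# parameters (106,496 vs. 692,224) for the same K x K coverage.
```

The savings grow linearly with K, which is what makes large kernels affordable in a lightweight network.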
Pages: 13