Scale-Aware Distillation Network for Lightweight Image Super-Resolution

Cited: 1
Authors
Lu, Haowei [1]
Lu, Yao [1]
Li, Gongping [1]
Sun, Yanbei [1]
Wang, Shunzhou [1]
Li, Yugang [1]
Institutions
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing Lab Intelligent Informat Technol, Beijing 100081, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PT III | 2021 / Vol. 13021
Funding
National Natural Science Foundation of China;
Keywords
Image super-resolution; Lightweight network; Multi-scale feature learning; Context learning;
DOI
10.1007/978-3-030-88010-1_11
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the renaissance of deep learning, convolutional neural network based methods have brought significant progress to many computer vision tasks (e.g., video object segmentation [21,38,40], human parsing [39], human-object interaction detection [39]), and many lightweight models have achieved great progress in single image super-resolution. However, their parameter counts remain too large for practical deployment, leaving room for further parameter reduction. Meanwhile, multi-scale features, which benefit the reconstruction of regions at different scales, are usually underutilized. To address these limitations, we propose a lightweight super-resolution network named the scale-aware distillation network (SDNet). SDNet is built by stacking scale-aware distillation blocks (SDBs), each of which contains a scale-aware distillation unit (SDU) and a context enhancement (CE) layer. Specifically, the SDU enriches hierarchical features at a granular level via grouped convolution, while the CE layer further enhances the multi-scale representation from the SDU through context learning to extract more discriminative information. Extensive experiments on commonly used super-resolution datasets show that our method achieves promising results against other state-of-the-art methods with fewer parameters.
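The abstract describes the architecture only at a high level. The following is a minimal, hypothetical PyTorch sketch of one scale-aware distillation block, assuming a grouped-convolution hierarchy for the SDU and a squeeze-and-excitation-style channel attention for the CE layer; the class names, channel splits, activation, and residual connection are illustrative assumptions, not the authors' exact design.

# Hypothetical sketch of a scale-aware distillation block (SDB): an SDU using
# grouped convolutions to refine hierarchical features at a granular level,
# followed by a context enhancement (CE) layer. All hyperparameters are assumed.
import torch
import torch.nn as nn

class ScaleAwareDistillationUnit(nn.Module):
    """Assumed SDU: split channels into groups and refine them hierarchically,
    so that later groups see progressively larger receptive fields."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        g = channels // groups
        # One 3x3 convolution per group; each group also receives the
        # previous group's refined features (hierarchical aggregation).
        self.convs = nn.ModuleList(
            [nn.Conv2d(g, g, kernel_size=3, padding=1) for _ in range(groups)]
        )
        self.act = nn.LeakyReLU(0.05, inplace=True)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.groups, dim=1)
        outs, prev = [], None
        for conv, s in zip(self.convs, splits):
            inp = s if prev is None else s + prev
            prev = self.act(conv(inp))
            outs.append(prev)
        return self.fuse(torch.cat(outs, dim=1))

class ContextEnhancementLayer(nn.Module):
    """Assumed CE layer: reweight features with global context
    (squeeze-and-excitation style channel attention)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mlp(self.pool(x))

class ScaleAwareDistillationBlock(nn.Module):
    """Assumed SDB: SDU followed by a CE layer, with a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.sdu = ScaleAwareDistillationUnit(channels)
        self.ce = ContextEnhancementLayer(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ce(self.sdu(x))

if __name__ == "__main__":
    block = ScaleAwareDistillationBlock(channels=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])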
Pages: 128-139
Number of pages: 12
Related Papers
40 references in total
[1] Ahn, Namhyuk; Kang, Byungkon; Sohn, Kyung-Ah. Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network. COMPUTER VISION - ECCV 2018, PT X, 2018, 11214: 256-272.
[2] Bevilacqua, Marco; Roumy, Aline; Guillemot, Christine; Morel, Marie-Line Alberi. Low-Complexity Single-Image Super-Resolution Based on Nonnegative Neighbor Embedding. PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2012, 2012.
[3] Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou. Learning a Deep Convolutional Network for Image Super-Resolution. COMPUTER VISION - ECCV 2014, PT IV, 2014, 8692: 184-199.
[4] Feng RC. 2020, arXiv:2008.00239, DOI 10.48550/arXiv.2008.00239.
[5] Gao, Qinquan; Zhao, Yan; Li, Gen; Tong, Tong. Image Super-Resolution Using Knowledge Distillation. COMPUTER VISION - ACCV 2018, PT II, 2019, 11362: 527-541.
[6] Han, Kai; Wang, Yunhe; Tian, Qi; Guo, Jianyuan; Xu, Chunjing; Xu, Chang. GhostNet: More Features from Cheap Operations. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020: 1577-1586.
[7] Haris, Muhammad; Shakhnarovich, Greg; Ukita, Norimichi. Deep Back-Projection Networks for Super-Resolution. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018: 1664-1673.
[8] Hu J. 2018, PROC CVPR IEEE: 7132.
[9] Huang JB. 2015, PROC CVPR IEEE: 5197, DOI 10.1109/CVPR.2015.7299156.
[10] Hui, Zheng; Gao, Xinbo; Yang, Yunchu; Wang, Xiumei. Lightweight Image Super-Resolution with Information Multi-distillation Network. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019: 2024-2032.