A Hardware-Aware Network for Real-World Single Image Super-Resolution

Cited by: 0
Authors
Ma R. [1 ,2 ]
Du X. [1 ,3 ]
Affiliations
[1] Institute for Applied Life Sciences, University of Massachusetts, Center for Personalized Health Monitoring, Amherst, 01003, MA
[2] University of Massachusetts, Electrical and Computer Engineering Department, Amherst, 01003, MA
[3] University of Massachusetts, Mechanical and Industrial Engineering Department, Amherst, 01003, MA
Source
IEEE Transactions on Artificial Intelligence | 2024 / Vol. 5 / No. 7
Funding
U.S. National Science Foundation
Keywords
Blind super-resolution; single image super-resolution (SISR); transfer learning
DOI
10.1109/TAI.2024.3368372
Abstract
Most single image super-resolution (SISR) methods are developed on synthetic low-resolution (LR) and high-resolution (HR) image pairs, which are simulated by a predetermined degradation operation such as bicubic downsampling. However, these methods only learn the inverse of that predetermined operation, and therefore fail to super-resolve real-world LR images whose true formation deviates from it. To address this, we propose a novel super-resolution (SR) framework named the hardware-aware super-resolution (HASR) network, which first extracts hardware information, particularly the camera degradation information. The LR images are then super-resolved by integrating the extracted information. To evaluate the performance of the HASR network, we build a dataset named Real-Micron from real-world micron-scale patterns. The paired LR and HR images are captured by changing the objectives and registered using a developed registration algorithm. Transfer learning is employed when training on the Real-Micron dataset, due to the limited amount of data. Experiments demonstrate that by integrating the degradation information, our proposed network achieves state-of-the-art performance on the blind SR task on both synthetic and real-world datasets. © 2020 IEEE.
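For context, the "predetermined degradation operation" the abstract contrasts with real-world degradations is typically implemented as a fixed bicubic downsample of the HR image. A minimal sketch of that synthetic LR generation, using Pillow (function name and scale factor are illustrative, not from the paper):

```python
from PIL import Image

def bicubic_lr(hr: Image.Image, scale: int = 4) -> Image.Image:
    """Simulate a synthetic LR image from an HR image by bicubic
    downsampling -- the predetermined degradation most SISR methods
    are trained to invert."""
    w, h = hr.size
    return hr.resize((w // scale, h // scale), Image.BICUBIC)
```

Real-world LR images (e.g., the Real-Micron captures taken through different objectives) deviate from this fixed operator, which is why a network trained only on such pairs fails on them.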
Pages: 3482 - 3496
Page count: 14