High-Magnification Super-Resolution Reconstruction of Image with Multi-Task Learning

Times Cited: 3
Authors
Li, Yanghui [1 ]
Zhu, Hong [1 ]
Yu, Shunyuan [2 ]
Affiliations
[1] Xian Univ Technol, Fac Automat & Informat Engn, Xian 710048, Peoples R China
[2] Ankang Univ, Inst Elect & Informat Engn, Ankang 725000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-task learning; high-magnification; single-image super-resolution; convolutional neural network; QUALITY ASSESSMENT; NETWORK;
DOI
10.3390/electronics11091412
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Single-image super-resolution technology has made great progress with the development of the convolutional neural network, but most current super-resolution methods do not attempt high-magnification reconstruction; they only perform ×2, ×3, and ×4 reconstruction of low-magnification down-sampled images without severe degradation. Based on this, this paper proposes a single-image high-magnification super-resolution method that extends the scale factor of image super-resolution to high magnification. By introducing the idea of multi-task learning, the high-magnification super-resolution process is decomposed into different super-resolution tasks. Each task is trained with its own data, yielding a network model for each task. Through the cascaded reconstruction of these task-specific network models, a low-resolution image accumulates reconstruction gains layer by layer, producing the final high-magnification super-resolution result. On benchmark datasets, the proposed method outperforms other super-resolution methods in both quantitative and qualitative comparisons.
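The cascaded, task-decomposed reconstruction described in the abstract can be pictured with a short sketch. The code below is a minimal illustration, not the authors' published implementation: StageSRNet, cascade_super_resolve, the layer widths, and the choice of three ×2 stages are assumptions made here only to show how independently trained stage models could be chained so that magnification accumulates stage by stage.

```python
import torch
import torch.nn as nn


class StageSRNet(nn.Module):
    """Hypothetical per-task network: a shallow CNN followed by a PixelShuffle x2 upsampler."""

    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
        )
        self.upsample = nn.PixelShuffle(scale)  # rearranges channels into a x2 larger image

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.body(x))


def cascade_super_resolve(lr_image: torch.Tensor, stage_models) -> torch.Tensor:
    """Apply the task-specific models in sequence; three x2 stages give an overall x8 output."""
    out = lr_image
    for model in stage_models:
        out = model(out)
    return out


if __name__ == "__main__":
    # In the paper's setting each stage would be trained on data prepared for that task;
    # here the three x2 stages are untrained and only illustrate the cascaded data flow.
    stages = [StageSRNet() for _ in range(3)]
    lr = torch.rand(1, 3, 32, 32)  # toy 32x32 low-resolution input
    with torch.no_grad():
        sr = cascade_super_resolve(lr, stages)
    print(sr.shape)  # torch.Size([1, 3, 256, 256]), i.e. x8 magnification
```

In this sketch, each stage only needs to bridge a modest ×2 gap, which is the motivation the abstract gives for decomposing high-magnification reconstruction into separately trained tasks rather than training a single network for the full scale factor.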
Pages: 19