Single image super-resolution via low-rank tensor representation and hierarchical dictionary learning

Cited by: 3
Authors
Jing, Peiguang [1 ]
Guan, Weili [2 ]
Bai, Xu [1 ]
Guo, Hongbin [1 ]
Su, Yuting [1 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin, Peoples R China
[2] Hewlett Packard Enterprise Singapore, Singapore, Singapore
Funding
National Natural Science Foundation of China
Keywords
Super-resolution; Low-rank representation; Tensor decomposition; Dictionary learning; RESOLUTION; ALGORITHM;
DOI
10.1007/s11042-019-08259-9
Chinese Library Classification (CLC)
TP [Automation & Computer Technology]
Discipline Classification Code
0812
Abstract
Super-resolution (SR) has been widely studied owing to its importance in practical applications. In this paper, we focus on generating an SR image from a single low-resolution (LR) input by exploiting the multi-resolution structure of that image. Taking the LR image together with its downsampled-resolution (DR) and upsampled-resolution (UR) versions as inputs, we propose a hierarchical dictionary learning approach that learns the latent UR-LR dictionary pair while preserving internal structural coherence with the LR-DR dictionary pair. A restriction imposed in this process is that the pairwise resolution images are trained jointly, which yields more compact patterns of image patches. In particular, to better explore the underlying structure of the tensor data spanned by image patches, we propose a low-rank tensor approximation (LRTA) algorithm based on nuclear-norm regularization that embeds input image patches into a low-dimensional space. Experimental results on publicly available images show that our proposed method achieves performance comparable to that of other state-of-the-art SR algorithms, even without using any external training databases.
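The core operation the abstract describes is pulling patch data toward a low-rank representation under nuclear-norm regularization. As an illustrative sketch only (not the authors' exact LRTA algorithm, which operates on tensors), singular value thresholding, the proximal operator of the nuclear norm, can be applied to a matrix of vectorized patches; the `patches` matrix, the threshold `tau`, and the toy rank-3 data below are all assumptions for demonstration:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm.
    Shrinks each singular value of X by tau, zeroing those below the threshold."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Toy data: 64 vectorized 8x8 patches with an underlying rank-3 structure plus noise.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 64))
patches = low_rank + 0.1 * rng.standard_normal((64, 64))

approx = svt(patches, tau=2.0)

# Thresholding suppresses the small noise singular values, so the result has a
# much lower effective rank than the noisy input.
rank_before = np.linalg.matrix_rank(patches)
rank_after = np.linalg.matrix_rank(approx)
print(rank_before, rank_after)
```

For tensor-valued patch data, the paper's nuclear-norm regularization would instead be applied to tensor unfoldings, but the shrinkage step above is the same basic mechanism.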
Pages: 11767-11785
Page count: 19
Related papers
50 records in total
  • [1] Single image super-resolution via low-rank tensor representation and hierarchical dictionary learning
    Peiguang Jing
    Weili Guan
    Xu Bai
    Hongbin Guo
    Yuting Su
    Multimedia Tools and Applications, 2020, 79 : 11767 - 11785
  • [2] Low-rank Representation for Single Image Super-resolution using Metric Learning
    Li, Shaohui
    Luo, Linkai
    Peng, Hong
    2017 12TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND EDUCATION (ICCSE 2017), 2017, : 415 - 418
  • [3] Single image super-resolution via adaptive sparse representation and low-rank constraint
    Li, Xuesong
    Cao, Guo
    Zhang, Youqiang
    Wang, Bisheng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 55 : 319 - 330
  • [4] Learning a Low Tensor-Train Rank Representation for Hyperspectral Image Super-Resolution
    Dian, Renwei
    Li, Shutao
    Fang, Leyuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30 (09) : 2672 - 2683
  • [5] Cluster-based image super-resolution via jointly low-rank and sparse representation
    Han, Ningning
    Song, Zhanjie
    Li, Ying
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2016, 38 : 175 - 185
  • [6] Low-Rank Neighbor Embedding for Single Image Super-Resolution
    Chen, Xiaoxuan
    Qi, Chun
    IEEE SIGNAL PROCESSING LETTERS, 2014, 21 (01) : 79 - 82
  • [7] Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution
    Xue, Jize
    Zhao, Yong-Qiang
    Bu, Yuanyang
    Liao, Wenzhi
    Chan, Jonathan Cheung-Wai
    Philips, Wilfried
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 3084 - 3097
  • [8] GJTD-LR: A Trainable Grouped Joint Tensor Dictionary With Low-Rank Prior for Single Hyperspectral Image Super-Resolution
    Liu, Cong
    Fan, Zhihao
    Zhang, Guixu
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [9] Single Image Super-Resolution Based on Nonlocal Sparse and Low-Rank Regularization
    Liu, Chunhong
    Fang, Faming
    Xu, Yingying
    Shen, Chaomin
    PRICAI 2016: TRENDS IN ARTIFICIAL INTELLIGENCE, 2016, 9810 : 251 - 261
  • [10] Single Image Super-Resolution through Sparse Representation via Coupled Dictionary learning
    Patel, Rutul
    Thakar, Vishvjit
    Joshi, Rutvij
    INTERNATIONAL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 2020, 66 (02) : 347 - 353