Generalized joint kernel regression and adaptive dictionary learning for single-image super-resolution

Cited by: 11
Authors
Huang, Chen [1 ]
Liang, Yicong [1 ]
Ding, Xiaoqing [1 ]
Fang, Chi [1 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Tsinghua Natl Lab Informat Sci & Technol, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
Keywords
Single-image super-resolution; Face hallucination; Face recognition; Joint kernel regression; Dictionary learning; SPARSE REPRESENTATION; FACE; ALGORITHM;
DOI
10.1016/j.sigpro.2013.11.042
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809;
Abstract
This paper proposes a new approach to single-image super-resolution (SR) based on generalized adaptive joint kernel regression (G-AJKR) and adaptive dictionary learning. The joint regression prior regularizes the ill-posed reconstruction problem by exploiting both the local structural regularity and the nonlocal self-similarity of images. It is composed of multiple locally generalized kernel regressors, defined over similar patches found in a nonlocal search range, which are then combined so that both types of image statistics are exploited simultaneously in a natural manner. Each regression group is weighted by a proposed regional redundancy measure that adaptively controls its relative regularization effect. This joint regression prior is further generalized across multiple scales and rotations. For robustness, adaptive dictionary learning and a dictionary-based sparsity prior are introduced to interact with this prior. We apply the proposed method to both general natural images and human face images (face hallucination); for the latter, we incorporate a new global face prior into SR reconstruction while preserving face discriminativity. In both cases, our method outperforms related state-of-the-art methods qualitatively and quantitatively. Moreover, our face hallucination method also outperforms the others when applied to face recognition. (C) 2013 Elsevier B.V. All rights reserved.
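The nonlocal weighting idea behind the joint regression prior can be illustrated with a minimal zeroth-order sketch: each candidate pixel in a search window is weighted by a Gaussian kernel on the distance between its surrounding patch and the reference patch, and the target pixel is estimated as the weighted average. This is only a hypothetical simplification for intuition, not the paper's full G-AJKR formulation (which uses generalized kernel regressors, redundancy-based group weights, and multi-scale/rotation extensions); the function name and parameters below are illustrative assumptions.

```python
import numpy as np

def nonlocal_kernel_regression(img, y, x, patch=3, search=7, h=10.0):
    """Zeroth-order nonlocal kernel regression estimate of pixel (y, x).

    Illustrative stand-in for a joint kernel regressor: candidates in the
    search window are weighted by a Gaussian kernel on the squared
    distance between their patch and the reference patch (hypothetical
    simplification; parameter names are not from the paper).
    """
    r = patch // 2
    H, W = img.shape
    ref = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    num, den = 0.0, 0.0
    # Scan the nonlocal search window, staying inside valid patch bounds.
    for yy in range(max(r, y - search), min(H - r, y + search + 1)):
        for xx in range(max(r, x - search), min(W - r, x + search + 1)):
            cand = img[yy - r:yy + r + 1, xx - r:xx + r + 1].astype(float)
            w = np.exp(-np.sum((ref - cand) ** 2) / (h ** 2))
            num += w * img[yy, xx]
            den += w
    return num / den

# Sanity check: on a constant image every patch is identical, so the
# weighted average reproduces the constant value exactly.
flat = np.full((20, 20), 5.0)
print(nonlocal_kernel_regression(flat, 10, 10))  # -> 5.0
```

In the paper's setting, many such regressors (one per group of similar patches) are combined and reweighted by a regional redundancy measure, rather than used in isolation as above.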
Pages: 142-154
Page count: 13
Related papers
50 records total
  • [21] Boosting Regression-Based Single-Image Super-Resolution Reconstruction
    Luo Shuang
    Huang Hui
    Zhang Kaibing
    LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (08)
  • [22] Learning adaptive interpolation kernels for fast single-image super resolution
    Hu, Xiyuan
    Peng, Silong
    Hwang, Wen-Liang
    SIGNAL IMAGE AND VIDEO PROCESSING, 2014, 8 (06) : 1077 - 1086
  • [23] Greedy regression in sparse coding space for single-image super-resolution
    Tang, Yi
    Yuan, Yuan
    Yan, Pingkun
    Li, Xuelong
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2013, 24 (02) : 148 - 159
  • [24] Coarse-to-Fine Learning for Single-Image Super-Resolution
    Zhang, Kaibing
    Tao, Dacheng
    Gao, Xinbo
    Li, Xuelong
    Li, Jie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2017, 28 (05) : 1109 - 1122
  • [25] A Conspectus of Deep Learning Techniques for Single-Image Super-Resolution
    Pattern Recognition and Image Analysis, 2022, 32 : 11 - 32
  • [27] Collaborative Representation Cascade for Single-Image Super-Resolution
    Zhang, Yongbing
    Zhang, Yulun
    Zhang, Jian
    Xu, Dong
    Fu, Yun
    Wang, Yisen
    Ji, Xiangyang
    Dai, Qionghai
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2019, 49 (05): 845 - 860
  • [28] Pairwise Operator Learning for Patch-Based Single-Image Super-Resolution
    Tang, Yi
    Shao, Ling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (02) : 994 - 1003
  • [29] Single-image super-resolution via low-rank matrix recovery and joint learning
    Chen, X.-X. (dada.yuasi@stu.xjtu.edu.cn)
    Science Press (37): 1372 - 1379
  • [30] Single Image Super-Resolution through Sparse Representation via Coupled Dictionary learning
    Patel, Rutul
    Thakar, Vishvjit
    Joshi, Rutvij
    INTERNATIONAL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 2020, 66 (02) : 347 - 353