Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation

Cited by: 138
Authors
Ma, Cheng [1 ,2 ,3 ]
Jiang, Zhenyu [1 ]
Rao, Yongming [1 ,2 ,3 ]
Lu, Jiwen [1 ,2 ,3 ]
Zhou, Jie [1 ,2 ,3 ,4 ]
Affiliations
[1] Tsinghua Univ, Dept Automat, Beijing, Peoples R China
[2] State Key Lab Intelligent Technol & Syst, Beijing, Peoples R China
[3] Beijing Natl Res Ctr Informat Sci & Technol, Beijing, Peoples R China
[4] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Beijing, Peoples R China
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2020
Funding
National Natural Science Foundation of China;
Keywords
HALLUCINATION; IMAGES;
DOI
10.1109/CVPR42600.2020.00561
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent works based on deep learning and facial priors have succeeded in super-resolving severely degraded facial images. However, the prior knowledge is not fully exploited in existing methods, since facial priors such as landmark and component maps are always estimated by low-resolution or coarsely super-resolved images, which may be inaccurate and thus affect the recovery performance. In this paper, we propose a deep face super-resolution (FSR) method with iterative collaboration between two recurrent networks which focus on facial image recovery and landmark estimation respectively. In each recurrent step, the recovery branch utilizes the prior knowledge of landmarks to yield higher-quality images which facilitate more accurate landmark estimation in turn. Therefore, the iterative information interaction between two processes boosts the performance of each other progressively. Moreover, a new attentive fusion module is designed to strengthen the guidance of landmark maps, where facial components are generated individually and aggregated attentively for better restoration. Quantitative and qualitative experimental results show the proposed method significantly outperforms state-of-the-art FSR methods in recovering high-quality face images.
Pages: 5568-5577
Page count: 10
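The iterative collaboration described in the abstract alternates between a recovery branch (guided by the current landmark estimate) and a landmark branch (re-estimated from the improved image). A minimal sketch of that loop, using hypothetical placeholder functions in place of the paper's recurrent networks and attentive fusion module:

```python
import numpy as np

def recovery_branch(lr_image, landmarks):
    # Placeholder recovery step: blend the current image with a
    # landmark-derived prior map (not the authors' architecture).
    prior = landmarks.mean() * np.ones_like(lr_image)
    return 0.9 * lr_image + 0.1 * prior

def landmark_branch(sr_image):
    # Placeholder landmark estimator: derives a toy "landmark" vector
    # from simple image statistics.
    return np.array([sr_image.mean(), sr_image.max()])

def iterative_collaboration(lr_image, steps=3):
    # Coarse initial landmark estimate from the low-resolution input.
    landmarks = landmark_branch(lr_image)
    sr = lr_image
    for _ in range(steps):
        # Landmarks guide recovery; the sharper result then yields a
        # more accurate landmark estimate, and the two improve jointly.
        sr = recovery_branch(sr, landmarks)
        landmarks = landmark_branch(sr)
    return sr, landmarks
```

This only illustrates the control flow of the mutual refinement; in the paper both branches are recurrent networks, and the landmark maps enter the recovery branch through an attentive fusion module that restores facial components individually before aggregating them.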
Related Papers
50 total
  • [21] Robust super-resolution of face images by iterative compensating neighborhood relationships
    Park, Sung Won
    Savvides, Marios
    2007 BIOMETRICS SYMPOSIUM, 2007, : 13 - 17
  • [22] Deep Learning-based Face Super-resolution: A Survey
    Jiang, Junjun
    Wang, Chenyang
    Liu, Xianming
    Ma, Jiayi
    ACM COMPUTING SURVEYS, 2023, 55 (01)
  • [23] A Bayesian estimation approach to super-resolution reconstruction for face images
    Huang, Hua
    Fan, Xin
    Qi, Chun
    Zhu, Shihua
    ADVANCES IN MACHINE VISION, IMAGE PROCESSING, AND PATTERN ANALYSIS, 2006, 4153 : 406 - 415
  • [24] Generalized face super-resolution
    Jia, Kui
    Gong, Shaogang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2008, 17 (06) : 873 - 886
  • [25] ICNet: Joint Alignment and Reconstruction via Iterative Collaboration for Video Super-Resolution
    Leng, Jiaxu
    Wang, Jia
    Gao, Xinbo
    Hu, Bo
    Gan, Ji
    Gao, Chenqiang
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 6675 - 6684
  • [26] Deep learning-based blind image super-resolution with iterative kernel reconstruction and noise estimation
    Ates, Hasan F.
    Yildirim, Suleyman
    Gunturk, Bahadir K.
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 233
  • [27] Deep Iterative Residual Convolutional Network for Single Image Super-Resolution
    Umer, Rao M.
    Foresti, G. L.
    Micheloni, C.
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 1852 - 1858
  • [28] A lorentzian stochastic estimation for a robust and iterative multiframe super-resolution reconstruction
    Patanavijit, V.
    Jitapunkul, S.
    TENCON 2006 - 2006 IEEE REGION 10 CONFERENCE, VOLS 1-4, 2006, : 516 - +
  • [29] An iterative approach to image super-resolution
    Bannore, Vivek
    Swierkowski, Leszek
    INTELLIGENT INFORMATION PROCESSING III, 2006, 228 : 473 - +
  • [30] Iterative Network for Image Super-Resolution
    Liu, Yuqing
    Wang, Shiqi
    Zhang, Jian
    Wang, Shanshe
    Ma, Siwei
    Gao, Wen
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 2259 - 2272