Face super-resolution via iterative collaboration between multi-attention mechanism and landmark estimation

Cited: 0
Authors
Shi, Chang-Teng [1 ]
Li, Meng-Jun [1 ]
An, Zhi Yong [1 ]
Affiliations
[1] Shandong Technol & Business Univ, Coll Comp Sci & Technol, Yantai 264005, Peoples R China
Keywords
Multi-scale information; Prior knowledge; Multi-attention; Iterative collaboration; IMAGE SUPERRESOLUTION;
DOI
10.1007/s40747-024-01673-z
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Face super-resolution technology can significantly enhance the resolution and quality of face images, which is crucial for applications such as surveillance, forensics, and face recognition. However, existing methods often fail to fully utilize multi-scale information and facial priors, resulting in poor recovery of facial structures in complex images. To address this issue, we propose a face super-resolution method based on iterative collaboration between a facial reconstruction network and a landmark estimation network. The method employs a Multi-Convolutional Attention Block for multi-scale feature extraction and introduces an Attention Fusion Block to enhance features with facial priors; the features are then further refined by a Residual Window Attention Group. Furthermore, the facial reconstruction network and the landmark estimation network collaborate iteratively: at each step, landmark priors are used to generate higher-quality images, which in turn enable more accurate landmark estimation, so that performance improves progressively. Evaluated on the standard 4×, 8×, and 16× super-resolution tasks on the CelebA and Helen datasets, the method demonstrates strong performance and achieves competitive SSIM, PSNR, and LPIPS scores. In particular, for 8× super-resolution on the CelebA dataset it reaches a PSNR/SSIM/LPIPS of 27.68 dB / 0.8112 / 0.0866, outperforming existing state-of-the-art methods in both accuracy and visual quality.
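The iterative collaboration described in the abstract can be pictured as a simple alternating loop between the two branches: the reconstruction branch is conditioned on the current landmark heatmaps, and the landmark branch re-estimates heatmaps from the improved face. The PyTorch sketch below is a minimal illustration under assumed settings; the module names ReconstructionNet and LandmarkNet, the channel counts, the 68-landmark heatmap format, the 8× scale, and the 3 refinement steps are all hypothetical choices, and the toy branches do not reproduce the paper's Multi-Convolutional Attention Block, Attention Fusion Block, or Residual Window Attention Group.

```python
# Minimal sketch of iterative collaboration between a face reconstruction
# branch and a landmark estimation branch. All shapes and hyperparameters
# are illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReconstructionNet(nn.Module):
    """Toy stand-in for the face reconstruction branch, conditioned on landmarks."""

    def __init__(self, channels=64, num_landmarks=68, scale=8):
        super().__init__()
        self.scale = scale
        self.encode = nn.Conv2d(3 + num_landmarks, channels, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr, heatmaps):
        # Upsample the LR face and fuse it with the current landmark heatmaps.
        up = F.interpolate(lr, scale_factor=self.scale, mode="bilinear",
                           align_corners=False)
        x = self.encode(torch.cat([up, heatmaps], dim=1))
        x = self.body(x)
        return up + self.to_rgb(x)  # residual prediction of the SR face


class LandmarkNet(nn.Module):
    """Toy stand-in for the landmark estimation branch, predicting heatmaps."""

    def __init__(self, channels=64, num_landmarks=68):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, num_landmarks, 3, padding=1),
        )

    def forward(self, sr):
        return torch.sigmoid(self.net(sr))


def iterative_collaboration(lr, recon_net, landmark_net,
                            num_steps=3, num_landmarks=68, scale=8):
    """Alternate reconstruction and landmark estimation for num_steps rounds."""
    b, _, h, w = lr.shape
    # Start from uninformative (all-zero) heatmaps at the SR resolution.
    heatmaps = lr.new_zeros(b, num_landmarks, h * scale, w * scale)
    sr = None
    for _ in range(num_steps):
        sr = recon_net(lr, heatmaps)   # landmark priors guide reconstruction
        heatmaps = landmark_net(sr)    # a better face yields better landmarks
    return sr, heatmaps


if __name__ == "__main__":
    lr_face = torch.rand(1, 3, 16, 16)  # e.g. 16x16 input for 8x SR -> 128x128
    sr_face, hm = iterative_collaboration(lr_face, ReconstructionNet(), LandmarkNet())
    print(sr_face.shape, hm.shape)
```

Each pass feeds the refined SR face back into the landmark branch, so errors in the initial landmark estimate are corrected as the image quality improves; the two branches would normally be trained jointly with image and landmark losses.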
Pages: 19
Related Papers
50 records in total
  • [1] Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation
    Ma, Cheng
    Jiang, Zhenyu
    Rao, Yongming
    Lu, Jiwen
    Zhou, Jie
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 5568 - 5577
  • [2] MSRFSR: Multi-Stage Refining Face Super-Resolution With Iterative Collaboration Between Face Recovery and Landmark Estimation
    Hajian, Amir
    Aramvith, Supavadee
    IEEE ACCESS, 2024, 12 : 56951 - 56972
  • [3] MSRFSR: Multi-Stage Refining Face Super-Resolution With Iterative Collaboration Between Face Recovery and Landmark Estimation (vol 12, pg 56951, 2024)
    Hajian, Amir
    Aramvith, Supavadee
    IEEE ACCESS, 2024, 12 : 157443 - 157443
  • [4] Face image super-resolution with an attention mechanism
    Chen X.
    Shen H.
    Bian Q.
    Wang Z.
    Tian X.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2019, 46 (03): : 148 - 153
  • [5] Multi-attention augmented network for single image super-resolution
    Chen, Rui
    Zhang, Heng
    Liu, Jixin
    PATTERN RECOGNITION, 2022, 122
  • [6] Multi-attention fusion transformer for single-image super-resolution
    Li, Guanxing
    Cui, Zhaotong
    Li, Meng
    Han, Yu
    Li, Tianping
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [7] Gated Multi-Attention Feedback Network for Medical Image Super-Resolution
    Shang, Jianrun
    Zhang, Xue
    Zhang, Guisheng
    Song, Wenhao
    Chen, Jinyong
    Li, Qilei
    Gao, Mingliang
    ELECTRONICS, 2022, 11 (21)
  • [8] Multi-Attention Multi-Image Super-Resolution Transformer (MAST) for Remote Sensing
    Li, Jiaao
    Lv, Qunbo
    Zhang, Wenjian
    Zhu, Baoyu
    Zhang, Guiyu
    Tan, Zheng
    REMOTE SENSING, 2023, 15 (17)
  • [9] Face frontalization with deep GAN via multi-attention mechanism
    Cao, Jiaqian
    Chen, Zhenxue
    Zhang, Yujiao
    Sun, Luna
    Chen, Jiyang
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (05) : 1965 - 1973
  • [10] Face frontalization with deep GAN via multi-attention mechanism
    Jiaqian Cao
    Zhenxue Chen
    Yujiao Zhang
    Luna Sun
    Jiyang Chen
    Signal, Image and Video Processing, 2023, 17 : 1965 - 1973