E2F-Net: Eyes-to-face inpainting via StyleGAN latent space

Cited by: 5
Authors
Hassanpour, Ahmad [1 ]
Jamalbafrani, Fatemeh [2 ]
Yang, Bian [1 ]
Raja, Kiran [4 ]
Veldhuis, Raymond [1 ]
Fierrez, Julian [3 ]
Affiliations
[1] NTNU, Dept Informat Secur & Commun Technol, Gjovik, Norway
[2] Sharif Univ Technol, Dept Elect Engn, Tehran, Iran
[3] Univ Autonoma Madrid, Sch Engn, Madrid, Spain
[4] NTNU, Dept Comp Sci, Gjovik, Norway
Keywords
Eyes-to-face; Face inpainting; Face reconstruction; GAN latent space; StyleGAN; Generative adversarial networks; Image
DOI
10.1016/j.patcog.2024.110442
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Face inpainting, the technique of restoring missing or damaged regions in facial images, is pivotal for applications like face recognition in occluded scenarios and image analysis with poor-quality captures. This process not only needs to produce realistic visuals but must also preserve individual identity characteristics. The aim of this paper is to inpaint a face given the periocular region (eyes-to-face) through a proposed new Generative Adversarial Network (GAN)-based model called Eyes-to-Face Network (E2F-Net). The proposed approach extracts identity and non-identity features from the periocular region using two dedicated encoders. The extracted features are then mapped to the latent space of a pre-trained StyleGAN generator to benefit from its state-of-the-art performance and its rich, diverse, and expressive latent space without any additional training. We further improve the StyleGAN output by finding the optimal code in the latent space using a new optimization technique for GAN inversion. Our E2F-Net requires minimal training, reducing computational complexity as a secondary benefit. Through extensive experiments, we show that our method successfully reconstructs the whole face with high quality, surpassing current techniques, despite significantly less training and supervision effort. We have generated seven eyes-to-face datasets based on well-known public face datasets for training and verifying our proposed method. The code and datasets are publicly available.
Pages: 16