FRNet: Improving Face De-occlusion via Feature Reconstruction

Cited: 0
|
Authors
Du, Shanshan [1 ]
Zhang, Liyan [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Nanjing 210016, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI | 2024 / Vol. 14435
Funding
National Natural Science Foundation of China;
Keywords
Face de-occlusion; Image inpainting; Deep learning;
DOI
10.1007/978-981-99-8552-4_25
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Face de-occlusion is essential for improving the accuracy of face-related tasks. However, most existing methods focus only on single-occlusion scenarios, rendering them sub-optimal under multiple occlusions. To alleviate this problem, we propose FRNet, a novel face de-occlusion framework based on feature reconstruction. FRNet automatically detects and removes single or multiple occlusions through a predict-extract-inpaint approach, making it a universal solution for handling multiple occlusions. In this paper, we propose a two-stage occlusion extractor and a two-stage face generator. The former uses predicted occlusion positions to obtain coarse occlusion masks, which a refinement module then fine-tunes to handle complex real-world occlusion scenarios. The latter uses predicted face structures to reconstruct the global structure, and then draws on neighboring areas and corresponding features to refine important regions, addressing structural deficiencies and feature disharmony in the generated face images. We also introduce a gender-consistency loss and an identity loss to improve the attribute-recovery accuracy of the generated images. Furthermore, to address the limitations of existing face de-occlusion datasets, we introduce a new synthetic face dataset covering both single and multiple occlusions, which effectively facilitates model training. Extensive experimental results demonstrate the superiority of the proposed FRNet over state-of-the-art methods.
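The abstract names a gender-consistency loss and an identity loss but gives no formulas. A common way to realize such auxiliary losses is to compare a pretrained classifier's or embedding network's outputs on the generated face against the ground-truth face. The sketch below is a minimal NumPy illustration under that assumption: the function names, the cosine form of the identity loss, and the cross-entropy form of the gender-consistency loss are all hypothetical, not taken from the paper.

```python
import numpy as np

def identity_loss(feat_gen, feat_gt):
    """Hypothetical identity loss: cosine distance between face embeddings
    of the generated image and the ground-truth image (e.g. from a
    pretrained face-recognition network). 0 when embeddings align."""
    f1 = feat_gen / np.linalg.norm(feat_gen)
    f2 = feat_gt / np.linalg.norm(feat_gt)
    return 1.0 - float(np.dot(f1, f2))

def gender_consistency_loss(p_gen, p_gt):
    """Hypothetical gender-consistency loss: cross-entropy between a gender
    classifier's probability outputs on the generated face (p_gen) and its
    outputs (or one-hot labels) on the ground-truth face (p_gt)."""
    p_gen = np.clip(p_gen, 1e-7, 1.0)  # avoid log(0)
    return -float(np.sum(p_gt * np.log(p_gen)))
```

In practice both terms would be weighted and added to the generator's reconstruction and adversarial losses; the weights are hyperparameters the record does not specify.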
Pages: 313-326
Page count: 14
Related papers
50 records
  • [1] Comparative Study Based on De-Occlusion and Reconstruction of Face Images in Degraded Conditions
    Ouannes, Laila
    Ben Khalifa, Anouar
    Ben Amara, Najoua Essoukri
    TRAITEMENT DU SIGNAL, 2021, 38 (03) : 573 - 585
  • [2] SILP-autoencoder for face de-occlusion
    Sun, Dengdi
    Xie, Wandong
    Ding, Zhuanlian
    Tang, Jin
    NEUROCOMPUTING, 2022, 485 : 47 - 56
  • [3] OCCLUSION-AWARE GAN FOR FACE DE-OCCLUSION IN THE WILD
    Dong, Jiayuan
    Zhang, Liyan
    Zhang, Hanwang
    Liu, Weichen
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020,
  • [4] Robust face alignment by cascaded regression and de-occlusion
    Wan, Jun
    Li, Jing
    Lai, Zhihui
    Du, Bo
    Zhang, Lefei
    NEURAL NETWORKS, 2020, 123 : 261 - 272
  • [5] Face De-Occlusion With Deep Cascade Guidance Learning
    Zhang, Ni
    Liu, Nian
    Han, Junwei
    Wan, Kaiyuan
    Shao, Ling
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 3217 - 3229
  • [6] Semi-Supervised Natural Face De-Occlusion
    Cai, Jiancheng
    Han, Hu
    Cui, Jiyun
    Chen, Jie
    Liu, Li
    Zhou, S. Kevin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 1044 - 1057
  • [7] Image-to-Image Translation Based Face De-Occlusion
    Maharjan, Rahul S.
    Din, Nizam Ud
    Yi, Juneho
    TWELFTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2020), 2020, 11519
  • [8] Robust LSTM-Autoencoders for Face De-Occlusion in the Wild
    Zhao, Fang
    Feng, Jiashi
    Zhao, Jian
    Yang, Wenhan
    Yan, Shuicheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (02) : 778 - 790
  • [9] Look Through Masks: Towards Masked Face Recognition with De-Occlusion Distillation
    Li, Chenyu
    Ge, Shiming
    Zhang, Daichi
    Li, Jia
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 3016 - 3024
  • [10] OASG-Net: Occlusion Aware and Structure-Guided Network for Face De-Occlusion
    Fu, Yuewei
    Liang, Buyun
    Wang, Zhongyuan
    Huang, Baojin
    Lu, Tao
    Liang, Chao
    Liao, Jing
    IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2025, 7 (02): : 234 - 245