A UNIFIED FRAMEWORK FOR MASKED AND MASK-FREE FACE RECOGNITION VIA FEATURE RECTIFICATION

Cited by: 2
Authors
Hao, Shaozhe [1]
Chen, Chaofeng [2]
Chen, Zhenfang [3]
Wong, Kwan-Yee K. [1]
Affiliations
[1] The University of Hong Kong, Hong Kong, China
[2] Nanyang Technological University, Singapore
[3] MIT-IBM Watson AI Lab, Cambridge, MA, USA
Source
2022 IEEE International Conference on Image Processing (ICIP), 2022
Keywords
Face Recognition; Masked Face; Feature Rectification; COVID-19
DOI
10.1109/ICIP46576.2022.9897292
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Face recognition under ideal conditions is now considered a well-solved problem thanks to advances in deep learning. Recognizing faces under occlusion, however, remains a challenge. Existing techniques often fail to recognize faces with both the mouth and nose covered by a mask, a situation that has become very common during the COVID-19 pandemic. Common approaches to this problem either 1) discard information from the masked regions during recognition or 2) restore the masked regions before recognition. Very few works consider the consistency between features extracted from masked faces and those extracted from their mask-free counterparts. As a result, models trained to recognize masked faces often show degraded performance on mask-free faces. In this paper, we propose a unified framework, named Face Feature Rectification Network (FFR-Net), for recognizing both masked and mask-free faces. We introduce rectification blocks that rectify features extracted by a state-of-the-art recognition model along both spatial and channel dimensions, minimizing the distance between a masked face and its mask-free counterpart in the rectified feature space. Experiments show that our unified framework learns a rectified feature space that recognizes both masked and mask-free faces effectively, achieving state-of-the-art results. Project code: https://github.com/haoosz/FFR-Net
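To make the rectification idea in the abstract concrete, the following is a minimal sketch assuming a PyTorch-style implementation; the module and function names (RectificationBlock, rectification_loss) are illustrative assumptions, not the authors' released code. A rectification block re-weights a backbone feature map along the channel and spatial dimensions, and a consistency loss pulls the rectified features of a masked face toward those of its mask-free counterpart.

# Minimal sketch (illustrative, not the authors' code) of feature rectification:
# re-weight a C x H x W feature map along channel and spatial dimensions, then
# penalize the distance between rectified masked and mask-free features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RectificationBlock(nn.Module):
    """Channel-wise and spatial-wise re-weighting of a backbone feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel branch: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: a single-channel attention map over H x W.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # rectify along the channel dimension
        x = x * self.spatial_gate(x)   # rectify along the spatial dimensions
        return x

def rectification_loss(feat_masked: torch.Tensor,
                       feat_clean: torch.Tensor) -> torch.Tensor:
    """Pull rectified masked-face features toward their mask-free counterparts."""
    return F.mse_loss(feat_masked, feat_clean)

if __name__ == "__main__":
    block = RectificationBlock(channels=512)
    # Stand-ins for backbone feature maps of a masked face and its mask-free pair.
    f_masked = torch.randn(4, 512, 7, 7)
    f_clean = torch.randn(4, 512, 7, 7)
    loss = rectification_loss(block(f_masked), block(f_clean))
    print(loss.item())

In the full framework such blocks would presumably sit between a pretrained recognition backbone and the identity classifier, with the consistency term combined with a standard recognition loss; the actual architecture and training objectives are given in the paper and in the project code linked above.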
Pages: 726-730
Page count: 5