CATFace: Cross-Attribute-Guided Transformer With Self-Attention Distillation for Low-Quality Face Recognition

Cited by: 1
Authors
Alipour Talemi, Niloufar [1 ]
Kashiani, Hossein [1 ]
Nasrabadi, Nasser M. [1 ]
Affiliation
[1] West Virginia Univ, Lane Dept Comp Sci & Elect Engn, Morgantown, WV 26506 USA
Source
IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE | 2024, Vol. 6, No. 1
Keywords
Face recognition; Task analysis; Training; Facial features; Neural networks; Transformers; Robustness; soft biometric attributes; knowledge distillation; self-attention mechanism; feature fusion;
DOI
10.1109/TBIOM.2023.3349218
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Although face recognition (FR) has achieved great success in recent years, it is still challenging to accurately recognize faces in low-quality images due to the obscured facial details. Nevertheless, it is often feasible to predict specific soft biometric (SB) attributes, such as gender and baldness, even from low-quality images. In this paper, we propose a novel multi-branch neural network that leverages SB attribute information to boost the performance of FR. To this end, we propose a cross-attribute-guided transformer fusion (CATF) module that effectively captures the long-range dependencies and relationships between FR and SB feature representations. The synergy created by the reciprocal flow of information in the dual cross-attention operations of the proposed CATF module enhances the performance of FR. Furthermore, we introduce a novel self-attention distillation framework that effectively highlights crucial facial regions, such as landmarks, by aligning low-quality images with their high-quality counterparts in the feature space. The proposed self-attention distillation regularizes our network to learn a unified quality-invariant feature representation in unconstrained environments. We conduct extensive experiments on various FR benchmarks of varying quality. Experimental results demonstrate the superiority of our FR method compared to state-of-the-art FR studies.
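To make the two mechanisms described in the abstract concrete, the sketch below illustrates (a) a dual cross-attention fusion in which FR tokens attend over soft-biometric tokens and vice versa, and (b) a self-attention distillation loss that aligns a low-quality branch's attention maps with those of a high-quality teacher. This is a minimal PyTorch sketch under assumed shapes and names; `DualCrossAttentionFusion`, the mean-pool fusion, and the MSE objective are illustrative assumptions, not the paper's actual CATF module or distillation objective.

```python
# Minimal sketch of the two ideas in the abstract. All class names,
# dimensions, and loss forms are illustrative assumptions, not the
# paper's actual CATF/distillation implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualCrossAttentionFusion(nn.Module):
    """FR tokens attend to soft-biometric (SB) tokens and vice versa;
    the two enriched streams are pooled and merged into one embedding."""
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        self.fr_to_sb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.sb_to_fr = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, fr_tokens, sb_tokens):
        # Reciprocal cross-attention: each stream queries the other.
        fr_enh, _ = self.fr_to_sb(fr_tokens, sb_tokens, sb_tokens)
        sb_enh, _ = self.sb_to_fr(sb_tokens, fr_tokens, fr_tokens)
        # Mean-pool each enriched stream, then fuse into one embedding.
        fused = torch.cat([fr_enh.mean(dim=1), sb_enh.mean(dim=1)], dim=-1)
        return self.proj(fused)

def self_attention_distillation_loss(student_attn, teacher_attn):
    """Align the attention maps of a low-quality (student) branch with a
    high-quality (teacher) branch; the teacher is detached, as is usual
    in knowledge distillation."""
    return F.mse_loss(student_attn, teacher_attn.detach())

# Toy usage with random tensors standing in for backbone token sequences.
fuse = DualCrossAttentionFusion()
fr = torch.randn(4, 49, 512)        # e.g., 7x7 spatial tokens, FR branch
sb = torch.randn(4, 49, 512)        # tokens from the SB attribute branch
embedding = fuse(fr, sb)            # (4, 512) fused identity embedding

attn_lq = torch.rand(4, 8, 49, 49)  # attention maps, low-quality input
attn_hq = torch.rand(4, 8, 49, 49)  # attention maps, high-quality input
loss = self_attention_distillation_loss(attn_lq, attn_hq)
```

Keeping the two cross-attention paths symmetric lets identity features be conditioned on attribute cues and attribute features on identity cues, which is the "reciprocal flow of information" the abstract refers to.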
Pages: 132 - 146
Number of pages: 15
Related Papers (6 items)
  • [1] Masked face recognition based on knowledge distillation and convolutional self-attention network
    Wan, Weiguo
    Wen, Runlin
    Yao, Li
    Yang, Yong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024: 2269 - 2284
  • [2] Saliency guided self-attention network for pedestrian attribute recognition in surveillance scenarios
    Li N.
    Wu Y.
    Liu Y.
    Li D.
    Gao J.
    Journal of China Universities of Posts and Telecommunications, 2022, 29 (05): 21 - 29
  • [3] STDP-Net: Improved Pedestrian Attribute Recognition Using Swin Transformer and Semantic Self-Attention
    Lee, Geonu
    Cho, Jungchan
    IEEE ACCESS, 2022, 10 : 82656 - 82667
  • [4] Texture-Guided Transfer Learning for Low-Quality Face Recognition
    Zhang, Meng
    Liu, Rujie
    Deguchi, Daisuke
    Murase, Hiroshi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 95 - 107
  • [5] Degradation model and attention guided distillation approach for low resolution face recognition
    Ullah, Mohsin
    Taj, Imtiaz Ahmad
    Raza, Rana Hammad
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 243
  • [6] Sleep-CMKD: Self-Attention CNN/Transformer Cross-Model Knowledge Distillation for Automatic Sleep Staging
    Kim, Hyounggyu
    Kim, Moogyeong
    Chung, Wonzoo
    2023 11TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI, 2023.