Advancing Face Parsing in Real-World: Synergizing Self-Attention and Self-Distillation

Cited by: 1
Authors
Han, Seungeun [1]
Yoon, Hosub [1]
Affiliations
[1] Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, South Korea
Keywords
Face recognition; Image edge detection; Feature extraction; Visualization; Task analysis; Adaptation models; Image segmentation; Face parsing; segmentation; self-attention; self-distillation; occlusion-aware; real-world
DOI
10.1109/ACCESS.2024.3368530
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Face parsing, the pixel-level segmentation of facial components, is pivotal for comprehensive facial analysis. However, previous studies have shown reduced performance on small or thin classes such as necklaces and earrings, and have struggled to adapt to occlusion scenarios involving masks, glasses, caps, or hands. To address these issues, this study proposes a robust face parsing technique that strategically integrates self-attention and self-distillation. The self-attention module enhances contextual information, enabling precise feature identification for each facial element. Multi-task learning for edge detection, coupled with a specialized loss function focused on edge regions, improves the understanding of fine structures and contours. Additionally, self-distillation applied during fine-tuning proves highly efficient, producing refined parsing results while maintaining high performance in scenarios with limited labels and ensuring robust generalization. The combination of self-attention and self-distillation addresses the shortcomings of previous studies, particularly in handling small or thin classes, improving overall performance and computational efficiency in line with current trends in this research area. The proposed approach attains a mean F1 score of 88.18% on the CelebAMask-HQ dataset, marking a significant advancement in face parsing with state-of-the-art performance. Even in challenging occlusion regions such as hands and masks, it achieves an F1 score of over 99%, demonstrating robust face parsing capabilities in real-world environments.
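The abstract describes combining an edge-focused segmentation loss with self-distillation for fine-tuning. As a rough illustration only (this is not the paper's actual implementation; the function names, the fixed edge up-weighting, and the temperature-scaled KL form of the distillation term are all assumptions), such a combined objective could be sketched in NumPy as:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def edge_weighted_ce(student_logits, labels, edge_mask, edge_weight=4.0):
    """Per-pixel cross-entropy, up-weighted on edge pixels.

    student_logits: (H, W, C) raw scores; labels: (H, W) int class ids;
    edge_mask: (H, W) with 1 on boundary pixels, 0 elsewhere.
    """
    p = softmax(student_logits)
    h, w = labels.shape
    # Negative log-likelihood of the true class at each pixel.
    nll = -np.log(p[np.arange(h)[:, None], np.arange(w)[None, :], labels] + 1e-12)
    weights = 1.0 + (edge_weight - 1.0) * edge_mask
    return float((weights * nll).sum() / weights.sum())

def distillation_kl(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL(teacher || student), averaged over pixels."""
    pt = softmax(teacher_logits / T)
    ps = softmax(student_logits / T)
    kl = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=-1)
    return float((T * T) * kl.mean())

def total_loss(student_logits, teacher_logits, labels, edge_mask, alpha=0.5):
    """Blend of supervised edge-weighted CE and the distillation term."""
    ce = edge_weighted_ce(student_logits, labels, edge_mask)
    kd = distillation_kl(student_logits, teacher_logits)
    return (1.0 - alpha) * ce + alpha * kd
```

Here the teacher would be a frozen copy of the model (self-distillation) and the edge mask would come from the edge-detection head; both the blending weight `alpha` and the edge weight are illustrative placeholders.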
Pages: 29812-29823 (12 pages)
Related papers (40 in total)
  • [1] Efficient Semantic Segmentation via Self-Attention and Self-Distillation
    An, Shumin
    Liao, Qingmin
    Lu, Zongqing
    Xue, Jing-Hao
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (09) : 15256 - 15266
  • [2] Real-World Non-Homogeneous Haze Removal by Sliding Self-Attention Wavelet Network
    Feng, Yuxin
    Meng, Xiaozhe
    Zhou, Fan
    Lin, Weisi
    Su, Zhuo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (10) : 5470 - 5485
  • [3] Self-Distillation into Self-Attention Heads for Improving Transformer-based End-to-End Neural Speaker Diarization
    Jeoung, Ye-Rin
    Choi, Jeong-Hwan
    Seong, Ju-Seok
    Kyung, JeHyun
    Chang, Joon-Hyuk
    INTERSPEECH 2023, 2023, : 3197 - 3201
  • [4] CATFace: Cross-Attribute-Guided Transformer With Self-Attention Distillation for Low-Quality Face Recognition
    Alipour Talemi, Niloufar
    Kashiani, Hossein
    Nasrabadi, Nasser M.
    IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2024, 6 (01): 132 - 146
  • [5] Compact Cloud Detection with Bidirectional Self-Attention Knowledge Distillation
    Chai, Yajie
    Fu, Kun
    Sun, Xian
    Diao, Wenhui
    Yan, Zhiyuan
    Feng, Yingchao
    Wang, Lei
    REMOTE SENSING, 2020, 12 (17)
  • [6] The Multimodal Scene Recognition Method Based on Self-Attention and Distillation
    Sun, Ning
    Xu, Wei
    Liu, Jixin
    Chai, Lei
    Sun, Haian
    IEEE MULTIMEDIA, 2024, 31 (04) : 25 - 36
  • [7] Structural Dependence Learning Based on Self-attention for Face Alignment
    Li, Biying
    Liu, Zhiwei
    Zhou, Wei
    Guo, Haiyun
    Wen, Xin
    Huang, Min
    Wang, Jinqiao
    MACHINE INTELLIGENCE RESEARCH, 2024, 21 (03): 514 - 525
  • [8] Leverage External Knowledge and Self-attention for Chinese Semantic Dependency Graph Parsing
    Liu, Dianqing
    Zhang, Lanqiu
    Shao, Yanqiu
    Sun, Junzhao
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2021, 28 (02) : 447 - 458
  • [9] Masked face recognition with convolutional visual self-attention network
    Ge, Yiming
    Liu, Hui
    Du, Junzhao
    Li, Zehua
    Wei, Yuheng
    NEUROCOMPUTING, 2023, 518 : 496 - 506