Facial Expression Recognition Based on Vision Transformer with Hybrid Local Attention

Cited: 1
Authors
Tian, Yuan [1 ]
Zhu, Jingxuan [1 ]
Yao, Huang [1 ]
Chen, Di [1 ]
Affiliations
[1] Cent China Normal Univ, Fac Artificial Intelligence Educ, Wuhan 430079, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 15
Keywords
facial expression recognition; attention; vision transformer
DOI
10.3390/app14156471
CLC Number
O6 [Chemistry]
Discipline Code
0703
Abstract
Facial expression recognition has broad application prospects in many settings. Owing to the complexity and variability of facial expressions, it remains a very challenging research topic. This paper proposes a Vision Transformer expression recognition method based on hybrid local attention (HLA-ViT). The network adopts a dual-stream structure: one stream extracts hybrid local features, and the other extracts global contextual features. Together, the two streams form a global-local fusion attention mechanism. The hybrid local attention module is proposed to enhance the network's robustness to face occlusion and head pose variations. A convolutional neural network is combined with the hybrid local attention module to obtain feature maps that highlight salient local information, while the ViT captures robust features from the global perspective of the visual sequence context. Finally, a decision-level fusion mechanism fuses the expression features with the locally salient information, adding complementary cues that improve recognition performance and robustness against interference factors such as occlusion and head pose changes in natural scenes. Extensive experiments demonstrate that our HLA-ViT network achieves excellent performance, reaching 90.45% on RAF-DB, 90.13% on FERPlus, and 65.07% on AffectNet.
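The abstract outlines the dual-stream, decision-level-fusion idea but not its implementation. The PyTorch sketch below illustrates that idea under explicit assumptions: the ResNet-18 backbone, the ViT-B/16 encoder, the channel-plus-spatial gating standing in for the paper's hybrid local attention module, the 7-class output, and all layer sizes are illustrative choices, not the authors' published design.

```python
# Minimal sketch of the dual-stream HLA-ViT idea from the abstract.
# Assumes torchvision >= 0.13. Backbone choices, layer sizes, and the
# channel+spatial gating below are illustrative stand-ins, not the
# authors' exact hybrid local attention design.
import torch
import torch.nn as nn
from torchvision.models import resnet18, vit_b_16

class HybridLocalAttention(nn.Module):
    """Stand-in for the hybrid local attention module: channel attention
    followed by spatial attention over a CNN feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)  # reweight spatial locations

class HLAViTSketch(nn.Module):
    """Dual-stream network: CNN + local attention in one stream, a ViT in
    the other, fused at the decision level by averaging class logits."""
    def __init__(self, num_classes: int = 7):  # 7 basic expressions assumed
        super().__init__()
        cnn = resnet18(weights=None)
        self.local_stream = nn.Sequential(*list(cnn.children())[:-2])  # 512-ch map
        self.local_attn = HybridLocalAttention(512)
        self.local_head = nn.Linear(512, num_classes)
        vit = vit_b_16(weights=None)
        vit.heads = nn.Linear(vit.hidden_dim, num_classes)  # replace classifier
        self.global_stream = vit

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feat = self.local_attn(self.local_stream(images))   # B x 512 x 7 x 7
        local_logits = self.local_head(feat.mean(dim=(2, 3)))
        global_logits = self.global_stream(images)          # ViT over patches
        return (local_logits + global_logits) / 2           # decision-level fusion

logits = HLAViTSketch()(torch.randn(2, 3, 224, 224))  # -> shape (2, 7)
```

Averaging the two heads' logits is only the simplest form of decision-level fusion; a weighted or learned combination would work equally well here, and the abstract does not specify the paper's exact fusion rule.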
Pages: 15
Related Papers (50 records)
  • [1] Patch attention convolutional vision transformer for facial expression recognition with occlusion
    Liu, Chang
    Hirota, Kaoru
    Dai, Yaping
    INFORMATION SCIENCES, 2023, 619: 781-794
  • [2] Facial Expression Recognition Based on Squeeze Vision Transformer
    Kim, Sangwon
    Nam, Jaeyeal
    Ko, Byoung Chul
    SENSORS, 2022, 22 (10)
  • [3] Vision Transformer With Attentive Pooling for Robust Facial Expression Recognition
    Xue, Fanglei
    Wang, Qiangchang
    Tan, Zichang
    Ma, Zhongsong
    Guo, Guodong
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (04): 3244-3256
  • [4] Self-supervised vision transformer-based few-shot learning for facial expression recognition
    Chen, Xuanchi
    Zheng, Xiangwei
    Sun, Kai
    Liu, Weilong
    Zhang, Yuang
    INFORMATION SCIENCES, 2023, 634: 206-226
  • [5] Collaborative Attention Transformer on facial expression recognition under partial occlusion
    Luo, Yan
    Shao, Jie
    Yang, Runxia
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (02)
  • [6] Vision Transformer Equipped with Neural Resizer on Facial Expression Recognition Task
    Hwang, Hyeonbin
    Kim, Soyeon
    Park, Wei-Jin
    Seo, Jiho
    Ko, Kyungtae
    Yeo, Hyeon
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 2614-2618
  • [7] Facial Expression Recognition Based on Fine-Tuned Channel-Spatial Attention Transformer
    Yao, Huang
    Yang, Xiaomeng
    Chen, Di
    Wang, Zhao
    Tian, Yuan
    SENSORS, 2023, 23 (15)
  • [8] Face-mask-aware Facial Expression Recognition based on Face Parsing and Vision Transformer
    Yang, Bo
    Wu, Jianming
    Ikeda, Kazushi
    Hattori, Gen
    Sugano, Masaru
    Iwasawa, Yusuke
    Matsuo, Yutaka
    PATTERN RECOGNITION LETTERS, 2022, 164: 173-182
  • [9] Enhanced Hybrid Vision Transformer with Multi-Scale Feature Integration and Patch Dropping for Facial Expression Recognition
    Li, Nianfeng
    Huang, Yongyuan
    Wang, Zhenyan
    Fan, Ziyao
    Li, Xinyuan
    Xiao, Zhiguo
    SENSORS, 2024, 24 (13)
  • [10] Facial Expression Recognition Based on Dual-Scale Hybrid Attention Mechanism
    Peng, Yongjia
    Xin, Jin
    2023 5TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS (ICCR), 2023: 240-244