An End-to-End Deep Model With Discriminative Facial Features for Facial Expression Recognition

Cited: 17
Authors
Liu, Jun [1]
Wang, Hongxia [2]
Feng, Yanjun [2]
Affiliations
[1] Shenyang Ligong Univ, Sch Automat & Elect Engn, Shenyang 110159, Peoples R China
[2] Shenyang Ligong Univ, Sch Informat Sci & Engn, Shenyang 110159, Peoples R China
Keywords
Face recognition; Feature extraction; Data models; Training; Deep learning; Facial features; Data mining; Data enhancement; CNN; Enhancement; Systems
DOI
10.1109/ACCESS.2021.3051403
CLC number
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Owing to complex environmental conditions and the variability of emotional expressions, most facial expression recognition systems cannot achieve a high recognition rate. More discriminative features describe facial expressions more accurately, so facial feature extraction is the key technology in facial expression recognition. In this article, an effective end-to-end deep model is proposed to improve the accuracy of facial expression recognition. Considering the importance of data pre-processing (very few studies have focused on this process), first, a data enhancement method is proposed to locate the face region and enhance the image contrast. Next, to obtain more discriminative features, a hybrid feature representation method is proposed, in which four typical feature extraction methods are combined. After that, an effective deep model is designed to train and test on the samples, obtaining near-optimal parameters at a lower computational cost. Ablation study results show that the proposed hybrid feature representation method helps improve recognition accuracy. Finally, to comprehensively evaluate the performance of the proposed model, a series of experiments is conducted on three benchmark datasets. Recognition rates of 94.5%, 98.6%, and 97.2% are achieved on the FER2013, AR, and CK+ datasets, respectively.
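The abstract does not name the face localizer, the contrast operator, or the four feature extraction methods that are combined, so the sketch below is illustrative only: it assumes a Haar-cascade face detector and CLAHE contrast enhancement for the pre-processing stage, and HOG plus a uniform-LBP histogram as hypothetical stand-ins for two of the four hand-crafted descriptors in the hybrid representation.

```python
# Illustrative sketch of the first two stages described in the abstract.
# The detector (Haar cascade), enhancement operator (CLAHE), and descriptors
# (HOG, uniform LBP) are assumptions, not the paper's actual method.
import cv2
import numpy as np
from skimage.feature import hog, local_binary_pattern

def preprocess(image_path):
    """Locate the face region and enhance its contrast (assumed operators)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found; skip this sample
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    # Adaptive histogram equalization as an assumed contrast-enhancement step.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    # 48x48 matches the standard FER2013 input size.
    return cv2.resize(clahe.apply(face), (48, 48))

def hybrid_features(face):
    """Concatenate hand-crafted descriptors (two stand-ins for the four used)."""
    hog_vec = hog(face, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])
```

The fused vector would then feed the deep model; simple concatenation is only one plausible fusion scheme, chosen here because the abstract gives no further detail.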
Pages: 12158 - 12166
Number of pages: 9
Related papers
50 records in total
  • [31] Variation of deep features analysis for facial expression recognition system
    Shabbir, Nazir
    Rout, Ranjeet Kumar
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (08) : 11507 - 11522
  • [32] FACIAL EXPRESSION RECOGNITION IN THE WILD USING RICH DEEP FEATURES
    Karali, Abubakrelsedik
    Bassiouny, Ahmad
    El-Saban, Motaz
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015 : 3442 - 3446
  • [34] End-to-End Facial Image Compression with Integrated Semantic Distortion Metric
    He, Tianyu
    Chen, Zhibo
    2018 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (IEEE VCIP), 2018,
  • [35] Facial reanimation with end-to-end hypoglossofacial anastomosis: 20 years' experience
    Catli, T.
    Bayazit, Y. A.
    Gokdogan, O.
    Goksu, N.
    JOURNAL OF LARYNGOLOGY AND OTOLOGY, 2010, 124 (01) : 23 - 25
  • [36] Discriminative Deep Feature Learning for Facial Emotion Recognition
    Dinh Viet Sang
    Le Tran Bao Cuong
    Pham Thai Ha
    2018 1ST INTERNATIONAL CONFERENCE ON MULTIMEDIA ANALYSIS AND PATTERN RECOGNITION (MAPR), 2018,
  • [37] Facial Expression Recognition Using Pose-Guided Face Alignment and Discriminative Features Based on Deep Learning
    Liu, Jun
    Feng, Yanjun
    Wang, Hongxia
    IEEE ACCESS, 2021, 9 : 69267 - 69277
  • [38] End-to-End Deep Learning Speech Recognition Model for Silent Speech Challenge
    Kimura, Naoki
    Su, Zixiong
    Saeki, Takaaki
    INTERSPEECH 2020, 2020 : 1025 - 1026
  • [39] Deep Facial Expression Recognition Using Xception Model
    Ouhammou, Mohamed
    Ababou, Nabil
    Baslam, Mohamed
    Aouragh, Si Lhoussain
    ARABIC LANGUAGE PROCESSING: FROM THEORY TO PRACTICE, ICALP 2023, PT II, 2025, 2340 : 209 - 220
  • [40] End-to-End Deep Learning for Driver Distraction Recognition
    Koesdwiady, Arief
    Bedawi, Safaa M.
    Ou, Chaojie
    Karray, Fakhri
    IMAGE ANALYSIS AND RECOGNITION, ICIAR 2017, 2017, 10317 : 11 - 18