Dominant and complementary emotion recognition using hybrid recurrent neural network

Cited by: 3
Authors
Jiddah, Salman Mohammed [1 ]
Yurtkan, Kamil [1 ,2 ]
Affiliations
[1] Cyprus Int Univ, Dept Comp Engn, Nicosia, Cyprus
[2] Cyprus Int Univ, Artificial Intelligence Applicat & Res Ctr, Nicosia, Cyprus
Keywords
Compound emotion recognition; Facial expression recognition; Recurrent neural network; Dominant and complementary emotion recognition; Deep learning; Facial analysis; COMPOUND FACIAL EXPRESSIONS;
DOI
10.1007/s11760-023-02563-6
CLC (Chinese Library Classification)
TM (Electrical Engineering); TN (Electronics & Communication Technology)
Discipline codes
0808; 0809
Abstract
Human emotion recognition remains a challenging research problem due to the diversity of individual faces, especially in scenarios involving compound emotions. A compound emotion is a combination of two basic emotions: a dominant one and a complementary one. Several studies have reported that recognizing compound emotions is more difficult than recognizing basic emotions. This paper proposes a hybrid recurrent neural network, CNN-LSTM, for compound expression recognition: the network is trained on extracted Facial Action Units representing the compound emotions. The iCV-MEFED dataset is employed in the analysis; it comprises 50 classes of compound emotions captured in a controlled environment with the guidance of professional psychologists. The main contribution of the paper is the fusion of CNN and LSTM networks, with the CNN acting as a feature extractor and dimensionality reducer, and the LSTM performing the learning through its memory blocks. This method shows a significant performance improvement over previous studies on the same dataset. Over all 50 classes of compound emotions, the reported performance is 24.7% recognition accuracy, 25.3% precision, 24.6% recall, and 24% F1-score. The proposed method achieves comparable and encouraging results and forms a basis for future improvements.
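The fusion described in the abstract (a CNN that compresses per-frame Action-Unit features, feeding an LSTM whose final state drives a 50-way classifier) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the AU count (17, as produced by tools such as OpenFace), sequence length, channel sizes, and the class name `CNNLSTM` are all assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Hypothetical sketch of a CNN-LSTM fusion: a 1-D CNN acts as a
    feature extractor and dimensionality reducer over each frame's
    Action-Unit vector, an LSTM models the frame sequence, and a linear
    head predicts one of 50 compound-emotion classes."""

    def __init__(self, n_aus=17, n_classes=50, hidden=64):
        super().__init__()
        # CNN stage: extracts local AU patterns and halves the feature dim
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        cnn_out = 8 * (n_aus // 2)           # channels * pooled length
        # LSTM stage: memory blocks learn over the per-frame CNN features
        self.lstm = nn.LSTM(cnn_out, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, n_aus)
        b, t, f = x.shape
        z = self.cnn(x.reshape(b * t, 1, f)) # apply CNN to each frame
        z = z.reshape(b, t, -1)              # back to (batch, time, feat)
        _, (h, _) = self.lstm(z)             # take the final hidden state
        return self.head(h[-1])              # (batch, n_classes)

model = CNNLSTM()
logits = model(torch.randn(4, 10, 17))       # 4 clips, 10 frames, 17 AUs
print(logits.shape)                          # torch.Size([4, 50])
```

The split of roles mirrors the paper's stated design: dimensionality reduction happens before the recurrent stage, so the LSTM operates on compact features rather than raw AU vectors.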
Pages: 3415-3423
Page count: 9