Cross-dataset emotion recognition from facial expressions through convolutional neural networks

Cited by: 13
Authors
Dias, William [1 ]
Andalo, Fernanda [1 ]
Padilha, Rafael [1 ]
Bertocco, Gabriel [1 ]
Almeida, Waldir [1 ]
Costa, Paula [2 ]
Rocha, Anderson [1 ]
Affiliations
[1] Univ Estadual Campinas, Inst Comp, Recod Ai, Artificial Intelligence Lab, BR-13083852 Campinas, SP, Brazil
[2] Univ Estadual Campinas, Sch Elect Engn & Comp Engn, BR-13083852 Campinas, SP, Brazil
Funding
São Paulo Research Foundation (FAPESP), Brazil;
Keywords
Emotion recognition; Facial analysis; Cross-dataset evaluation; Deep learning; INTENSITY ESTIMATION; 3D; DATABASE;
DOI
10.1016/j.jvcir.2021.103395
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
The face is the window to the soul. This is what the 19th-century French doctor Duchenne de Boulogne thought. Using electric shocks to stimulate muscular contractions and induce bizarre-looking expressions, he wanted to understand how muscles produce facial expressions and reveal the most hidden human emotions. Two centuries later, this research field remains very active. We see automatic systems for recognizing emotion and facial expression being applied in medicine, security and surveillance systems, advertising and marketing, among others. However, there are still fundamental questions that scientists are trying to answer when analyzing a person's emotional state from their facial expressions. Is it possible to reliably infer someone's internal state based only on their facial muscles' movements? Is there a universal facial setting to express basic emotions such as anger, disgust, fear, happiness, sadness, and surprise? In this research, we seek to address some of these questions through convolutional neural networks. Unlike most studies in the prior art, we are particularly interested in examining whether characteristics learned from one group of people can be generalized to predict another's emotions successfully. In this sense, we adopt a cross-dataset evaluation protocol to assess the performance of the proposed methods. Our baseline is a custom-tailored model initially used in face recognition to categorize emotion. By applying data visualization techniques, we improve our baseline model, deriving two other methods. The first method aims to direct the network's attention to regions of the face considered important in the literature but ignored by the baseline model, using patches to hide random parts of the facial image so that the network can learn discriminative characteristics in different regions. 
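The patch-based strategy described above can be sketched as follows. This is an illustrative reimplementation under assumptions, not the authors' code: the function name, patch size, and uniform placement policy are all hypothetical, and the patch is filled with zeros for simplicity.

```python
import numpy as np

def occlude_random_patch(image, patch_size=20, rng=None):
    """Hide a randomly placed square patch of the face image so the
    network cannot rely on a single region and must learn
    discriminative features elsewhere."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Choose a top-left corner such that the patch fits inside the image.
    top = int(rng.integers(0, h - patch_size + 1))
    left = int(rng.integers(0, w - patch_size + 1))
    occluded = image.copy()
    occluded[top:top + patch_size, left:left + patch_size] = 0
    return occluded
```

Applied on the fly during training, each epoch presents the network with different hidden regions, which is what forces attention to spread across the face.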
The second method explores a loss function that generates data representations in high-dimensional spaces so that examples of the same emotion class are close and examples of different classes are distant. Finally, we investigate the complementarity between these two methods, proposing a late-fusion technique that combines their outputs through the multiplication of probabilities. We compare our results to an extensive list of works evaluated on the same datasets. Against works that followed an intra-dataset protocol, our methods present competitive numbers on all of these datasets. Under a cross-dataset protocol, we achieve state-of-the-art results, outperforming even commercial off-the-shelf solutions from well-known tech companies.
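The late-fusion rule the abstract describes, combining two models' outputs by multiplying probabilities, can be sketched as below. The function name and the renormalization step are assumptions for illustration; only the elementwise product of class probabilities comes from the abstract.

```python
import numpy as np

def fuse_by_product(probs_a, probs_b):
    """Late fusion of two classifiers: multiply their class-probability
    vectors elementwise, then renormalize so the result sums to one."""
    fused = np.asarray(probs_a, dtype=float) * np.asarray(probs_b, dtype=float)
    return fused / fused.sum()
```

The predicted emotion is then the `argmax` of the fused vector; the product rule rewards classes on which both models agree and suppresses classes that either model considers unlikely.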
Pages: 18
Related Papers
50 records
  • [41] Automatic Annotation of Corpora For Emotion Recognition Through Facial Expressions Analysis
    Diamantini, Claudia
    Mircoli, Alex
    Potena, Domenico
    Storti, Emanuele
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 5650 - 5657
  • [42] Deep Convolutional and Recurrent Neural Networks for Emotion Recognition from Human Behaviors
    Deng, James J.
    Leung, Clement H. C.
    COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2020, PT II, 2020, 12250 : 550 - 561
  • [43] Cross-dataset Deep Transfer Learning for Activity Recognition
    Gjoreski, Martin
    Kalabakov, Stefan
    Lustrek, Mitja
    Gams, Matjaz
    Gjoreski, Hristijan
    UBICOMP/ISWC'19 ADJUNCT: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2019 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2019, : 714 - 718
  • [44] Real Time Emotion Recognition from Facial Expressions Using CNN Architecture
    Ozdemir, Mehmet Akif
    Elagoz, Berkay
    Alaybeyoglu, Aysegul
    Sadighzadeh, Reza
    Akan, Aydin
    2019 MEDICAL TECHNOLOGIES CONGRESS (TIPTEKNO), 2019, : 417 - 420
  • [45] FERDCNN: an efficient method for facial expression recognition through deep convolutional neural networks
    Rashad, Metwally
    Alebiary, Doaa
    Aldawsari, Mohammed
    Elsawy, Ahmed
    AbuEl-Atta, Ahmed H.
    PEERJ COMPUTER SCIENCE, 2024, 10
  • [46] EEG-based emotion recognition using random Convolutional Neural Networks
    Cheng, Wen Xin
    Gao, Ruobin
    Suganthan, P. N.
    Yuen, Kum Fai
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 116
  • [47] EEG-based emotion recognition with cascaded convolutional recurrent neural networks
    Meng, Ming
    Zhang, Yu
    Ma, Yuliang
    Gao, Yunyuan
    Kong, Wanzeng
    PATTERN ANALYSIS AND APPLICATIONS, 2023, 26 (02) : 783 - 795
  • [48] Gender Differentiated Convolutional Neural Networks for Speech Emotion Recognition
    Mishra, Puneet
    Sharma, Ruchir
    2020 12TH INTERNATIONAL CONGRESS ON ULTRA MODERN TELECOMMUNICATIONS AND CONTROL SYSTEMS AND WORKSHOPS (ICUMT 2020), 2020, : 142 - 148
  • [49] Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns
    Levi, Gil
    Hassner, Tal
    ICMI'15: PROCEEDINGS OF THE 2015 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2015, : 503 - 510
  • [50] Facial Expression Based Emotion Recognition Using Neural Networks
    Yagis, Ekin
    Unel, Mustafa
    IMAGE ANALYSIS AND RECOGNITION (ICIAR 2018), 2018, 10882 : 210 - 217