Emotion Classification Using a Tensorflow Generative Adversarial Network Implementation

Cited by: 11
Authors
Caramihale, Traian [1 ]
Popescu, Dan [1 ]
Ichim, Loretta [1 ]
Affiliations
[1] Univ Politehn Bucuresti, Dept Control Engn & Ind Informat, Bucharest 060042, Romania
Source
SYMMETRY-BASEL | 2018, Vol. 10, No. 9
Keywords
generative adversarial network; emotion classification; facial key point detection; facial images processing; convolutional neural networks; FACE RECOGNITION; NEGATIVE EMOTIONS;
DOI
10.3390/sym10090414
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
The detection of human emotions has applications in various domains such as assisted living, health monitoring, domestic appliance control, real-time crowd behavior tracking, and emotional security. This paper proposes a new system for emotion classification based on a generative adversarial network (GAN) classifier. Generative adversarial networks have been widely used for generating realistic images, but their classification capabilities have been only scarcely exploited. One of the main advantages is that, by using the generator, we can extend our testing dataset and add more variety to each of the seven emotion classes we try to identify. Thus, the novelty of our study consists in increasing the number of classes from N to 2N (in the learning phase) by considering both real and fake emotions. Facial key points are obtained from real and generated facial images, and the vectors connecting them to the facial center of gravity are used by the discriminator to classify the image into one of the 14 classes of interest (real and fake variants of the seven emotions). As another contribution, real images from different emotion classes are used in the generation process, unlike the classical GAN approach, which generates images from simple noise arrays. Using the proposed method, our system can classify emotions in facial images regardless of gender, race, ethnicity, age, and face rotation. An accuracy of 75.2% was obtained on 7000 real images (14,000 images when also considering the generated ones) from multiple combined facial datasets.
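The abstract describes two mechanisms that a short sketch can make concrete: converting facial key points into vectors pointing to the facial center of gravity, and a discriminator that outputs 2N = 14 classes (a real and a fake variant of each of the seven emotions). The following is a minimal, illustrative TensorFlow sketch, not the authors' implementation; the number of key points (68), the layer sizes, and the helper names (keypoints_to_centroid_vectors, build_discriminator) are assumptions made for the example.

```python
# Minimal sketch of the 2N-class discriminator idea from the abstract.
# N = 7 emotions; the discriminator distinguishes real vs. fake for each,
# giving 14 output classes. Layer sizes and key-point count are assumed.
import numpy as np
import tensorflow as tf

NUM_EMOTIONS = 7                 # seven emotion classes (from the abstract)
NUM_CLASSES = 2 * NUM_EMOTIONS   # real + fake variant of each emotion
NUM_KEYPOINTS = 68               # assumed number of facial key points

def keypoints_to_centroid_vectors(keypoints: np.ndarray) -> np.ndarray:
    """Turn (x, y) key points into vectors pointing to the face's
    center of gravity, then flatten them into a feature vector."""
    centroid = keypoints.mean(axis=0)            # facial center of gravity
    return (centroid - keypoints).reshape(-1)    # shape: (2 * NUM_KEYPOINTS,)

def build_discriminator() -> tf.keras.Model:
    """Small dense classifier over the key-point vectors with 14 outputs."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(2 * NUM_KEYPOINTS,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    # Random key points stand in for detector output on one face image.
    keypoints = np.random.rand(NUM_KEYPOINTS, 2).astype("float32")
    features = keypoints_to_centroid_vectors(keypoints)[None, :]
    model = build_discriminator()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    print(model.predict(features).shape)  # (1, 14): real/fake x 7 emotions
```

A full GAN in the spirit of the paper would pair such a discriminator with a generator conditioned on real images from other emotion classes, as the abstract states, rather than on plain noise vectors; that part is omitted here.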
Pages: 19