Perceptual integration of kinematic components in the recognition of emotional facial expressions

Cited by: 10
Authors
Chiovetto, Enrico [1 ]
Curio, Cristobal [2 ,3 ]
Endres, Dominik [4 ,5 ]
Giese, Martin [1 ]
Affiliations
[1] Univ Clin, Dept Cognit Neurol, Sect Computat Sensomotor, Tubingen, Germany
[2] Reutlingen Univ, Dept Comp Sci, Cognit Syst Grp, Tubingen, Germany
[3] Max Planck Inst Biol Cybernet, Tubingen, Germany
[4] UMR, Dept Psychol, Theoret Neurosci Grp, Marburg, Germany
[5] UKT, Dept Cognit Neurol, Sect Computat Sensomotor, Tubingen, Germany
Funding
EU Horizon 2020;
Keywords
dynamic facial expressions; emotions; synergies; motor primitives; emotion perception; MUSCLE SYNERGIES; BLIND SEPARATION; MOTION; INFORMATION; PRIMITIVES; SIMULATION; MOVEMENTS; PATTERNS; IDENTITY; MODEL;
DOI
10.1167/18.4.13
Chinese Library Classification
R77 [Ophthalmology];
Discipline code
100212;
Abstract
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, massively reducing the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found a remarkably low dimensionality, with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment demonstrating that expressions simulated with only two primitives are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting a very low-dimensional parametrization of the associated facial expressions.
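The core claim, that a handful of primitives suffices to reconstruct many marker trajectories, can be illustrated with a toy dimensionality analysis. The sketch below is not the authors' method: it generates synthetic "facial marker" trajectories from two hidden time courses (the specific signals and sizes are assumptions for illustration) and then checks, via PCA computed with an SVD, how many components are needed to explain the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic marker trajectories: D markers sampled at T time points,
# generated from K_true = 2 hidden primitives plus small noise.
# (Illustrative data only; the paper used recorded facial motion.)
T, D, K_true = 200, 15, 2
t = np.linspace(0.0, 1.0, T)
primitives = np.stack([
    np.sin(2 * np.pi * t),                   # oscillatory time course
    np.exp(-((t - 0.5) ** 2) / 0.02),        # transient "burst" time course
])                                           # shape (2, T)
mixing = rng.normal(size=(D, K_true))        # marker-specific weights
X = mixing @ primitives + 0.01 * rng.normal(size=(D, T))

# PCA via SVD on mean-centered data: the spectrum reveals the
# effective dimensionality of the trajectory set.
Xc = X - X.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print(np.round(explained[:4], 4))

# Two components should capture nearly all the variance,
# mirroring the two-primitive result reported in the abstract.
assert explained[:2].sum() > 0.95
```

In practice one would compare models with 1, 2, 3, … primitives on held-out data (or, as in the abstract, by Bayesian model comparison) rather than eyeballing a variance spectrum, but the rank-revealing idea is the same.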
Pages: 1-19
Number of pages: 19