Representing Affective Facial Expressions for Robots and Embodied Conversational Agents by Facial Landmarks

Cited by: 10
Authors
Liu, Caixia [1,2]
Ham, Jaap [1]
Postma, Eric [2]
Midden, Cees [1]
Joosten, Bart [2]
Goudbeek, Martijn [2]
Affiliations
[1] Eindhoven Univ Technol, Dept Ind Engn & Innovat Sci, Human Technol Interact Grp, NL-5600 MB Eindhoven, Netherlands
[2] Tilburg Univ, Tilburg Ctr Cognit & Commun, NL-5000 LE Tilburg, Netherlands
Keywords
Robots; Embodied conversational agents; Emotion; Facial expression; Facial landmarks; FaceTracker; PERCEPTION; EMOTION; RECOGNITION; MOTION;
DOI
10.1007/s12369-013-0208-9
CLC number
TP24 [Robotics]
Discipline code
080202; 1405
Abstract
Affective robots and embodied conversational agents require convincing facial expressions to be socially acceptable. To generate facial expressions virtually, we need to investigate the relationship between technology and human perception of affective and social signals. Facial landmarks, the locations of the crucial parts of a face, are important for the perception of the affective and social signals conveyed by facial expressions. Earlier research did not use this kind of technology, but instead used analogue techniques to generate point-light faces. The goal of our study is to investigate whether digitally extracted facial landmarks contain sufficient information for humans to recognize the facial expressions they encode. In this study, participants were presented with facial expressions encoded as moving landmarks; these facial-landmark videos were obtained by applying face analysis software to full-face videos of acted emotions. The facial-landmark videos were shown to 16 participants, who were instructed to classify each sequence according to the emotion it represented. Results revealed that for three of the five facial-landmark videos (happiness, sadness and anger) participants recognized the emotions accurately, whereas for the other two (fear and disgust) recognition accuracy was below chance, suggesting that the landmarks carry information about at least some of the expressed emotions. Results also show that emotions with high levels of arousal and valence are recognized better than those with low levels of arousal and valence. We argue that the question of whether digitally extracted facial landmarks can serve as a basis for representing facial expressions of emotion is crucial for the development of successful human-robot interaction. We conclude that landmarks provide a basis for the virtual generation of emotions in humanoid agents, and discuss how additional facial information might be included to provide a sufficient basis for faithful emotion identification.
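The abstract describes extracting facial landmarks from full-face videos with face analysis software (the FaceTracker keyword indicates the tool used) and presenting them as landmark-only videos. The sketch below is a minimal illustration of that general pipeline, not the authors' implementation: it assumes MediaPipe Face Mesh as a freely available stand-in for FaceTracker, and the helper function landmark_video is hypothetical.

# Minimal sketch of a landmark-extraction pipeline; NOT the authors' FaceTracker setup.
# Assumptions: MediaPipe Face Mesh stands in for FaceTracker, and landmark_video is a
# hypothetical helper that renders a point-light style video (landmarks on black).
import cv2
import numpy as np
import mediapipe as mp


def landmark_video(in_path: str, out_path: str) -> None:
    """Extract facial landmarks per frame and write a landmarks-only video."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            canvas = np.zeros_like(frame)  # black background: only the landmarks remain visible
            if result.multi_face_landmarks:
                for lm in result.multi_face_landmarks[0].landmark:
                    cv2.circle(canvas, (int(lm.x * w), int(lm.y * h)), 2, (255, 255, 255), -1)
            writer.write(canvas)

    cap.release()
    writer.release()


# Example: turn a full-face recording of an acted emotion into a landmark-only clip.
# landmark_video("acted_anger.mp4", "acted_anger_landmarks.mp4")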
Pages: 619-626
Number of pages: 8
References
20 in total
[1] Alexander O, Rogers M, Lambeth W, Chiang M, Debevec P. Creating a photoreal digital actor: The Digital Emily Project. 2009 Conference for Visual Media Production (CVMP 2009), 2009: 176-187.
[2] Anonymous, 1997, ser. Studies in Emotion and Social Interaction, 2nd series.
[3] Aviezer H. First Impressions, 2008: 255.
[4] Banziger T. Blueprint for Affective Computing: A Sourcebook, 2010. DOI 10.1037/A0025827.
[5] Banziger T. Emotion, 2011. DOI 10.1037/a0025827.
[6] Bassili JN. Facial motion in perception of faces and of emotional expression. Journal of Experimental Psychology: Human Perception and Performance, 1978, 4(3): 373-379.
[7] Breazeal C. Emotion and sociable humanoid robots. International Journal of Human-Computer Studies, 2003, 59(1-2): 119-155.
[8] Breazeal CL. Designing Sociable Robots.
[9] Breazeal CL. Thesis, MIT, 2000: 178.
[10] Cheng L-C, Lin C-Y, Huang C-C. Visualization of facial expression deformation applied to the mechanism improvement of face robot. International Journal of Social Robotics, 2013, 5(4): 423-439.