ARTICULATED MOTION AND DEFORMABLE OBJECTS, PROCEEDINGS
Year: 2006
Volume: 4069
Keywords:
facial expression;
multimodal interface;
DOI:
none
CLC classification:
TP18 [Artificial Intelligence Theory];
Subject classification codes:
081104;
0812;
0835;
1405;
Abstract:
We present a simple and computationally feasible method for automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (a subset of the MPEG-4 feature points) to extract relevant emotional information (essentially five distances, the presence of wrinkles, and mouth shape). The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned on a database of 399 images. At present, the method is applied to static images; application to sequences is now under development. Extracting such information about the user is of great interest for the development of new multimodal user interfaces.
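The abstract's scheme (a handful of distances measured from MPEG-4 feature points, plus wrinkle presence, mapped by rules to the six basic emotions or neutral) might be sketched as follows. This is a minimal illustration only: the distance definitions, thresholds, and rules below are hypothetical assumptions, not the paper's actual parameters.

```python
def classify_emotion(d_eyebrow_eye, d_eye_open, d_mouth_open,
                     d_mouth_width, d_lip_corner, wrinkles_present):
    """Toy rule-based classifier over five facial distances.

    Each distance is assumed normalized against the subject's
    neutral face, so a value of 1.0 means "same as neutral".
    All thresholds are illustrative, not from the paper.
    """
    if d_mouth_width > 1.2 and d_lip_corner > 1.1:
        return "joy"          # widened mouth, raised lip corners
    if d_eyebrow_eye > 1.3 and d_eye_open > 1.2 and d_mouth_open > 1.3:
        return "surprise"     # raised brows, wide eyes, open mouth
    if wrinkles_present and d_eyebrow_eye < 0.8:
        return "anger"        # frown wrinkles, lowered brows
    if d_lip_corner < 0.9 and d_eyebrow_eye > 1.1:
        return "sadness"      # drooping lip corners, inner brows up
    if d_eye_open > 1.2 and d_mouth_width < 0.95:
        return "fear"         # widened eyes, stretched-narrow mouth
    if d_mouth_width < 0.9 and wrinkles_present:
        return "disgust"      # nose/mouth wrinkling
    return "neutral"          # no rule fired
```

In the paper's actual pipeline the rules were tuned on the 399-image database; a sketch like this only conveys the general idea of mapping normalized feature-point distances to discrete emotion labels.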