Exploring Sources of Variation in Human Behavioral Data: Towards Automatic Audio-Visual Emotion Recognition

Cited by: 0
Authors
Kim, Yelin [1]
Affiliations
[1] Univ Michigan, Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
Source
2015 INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII) | 2015
Keywords
affective computing; emotion recognition; emotion estimation; variation; multimodal; temporal; human perception; CLASSIFICATION; SPEECH
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
My PhD work aims to develop computational methodologies for automatic emotion recognition from audio-visual behavioral data. A central challenge in automatic emotion recognition is that human behavioral data are highly complex, because multiple co-occurring sources of variation modulate behavior. My goal is to provide computational frameworks for understanding, and controlling for, the sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6]. In particular, my research provides representation, modeling, and analysis methods for complex, time-varying behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and by deepening the understanding of affective cues embedded within complex audio-visual data.
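Since the abstract's central technical idea is temporal segmentation of audio-visual behavior, a minimal Python sketch may help make it concrete. This is only one common realization of the idea (fixed-length overlapping windows with per-segment statistics), not the method of the paper; the function segment_features and its parameters (40-dimensional features, 30-frame windows, 15-frame hop) are illustrative assumptions.

# Minimal sketch of temporal segmentation for frame-level audio-visual
# emotion features (illustrative only, not the paper's actual method).
import numpy as np

def segment_features(frames: np.ndarray, seg_len: int, hop: int) -> np.ndarray:
    """Slice a (num_frames, num_features) stream into overlapping segments
    and represent each segment by its per-feature mean and std over time."""
    segments = []
    for start in range(0, len(frames) - seg_len + 1, hop):
        window = frames[start:start + seg_len]
        # Segment-level descriptor: concatenate mean and std across frames.
        segments.append(np.concatenate([window.mean(axis=0), window.std(axis=0)]))
    return np.stack(segments)

# Example: 300 frames of hypothetical 40-dim audio-visual features
# (e.g., MFCCs plus facial landmark displacements), segmented into
# 1-second windows (30 frames at 30 fps) with 50% overlap.
rng = np.random.default_rng(0)
frames = rng.normal(size=(300, 40))
descriptors = segment_features(frames, seg_len=30, hop=15)
print(descriptors.shape)  # (19, 80): 19 segments, 80-dim descriptors

Each segment-level descriptor could then be fed to any standard classifier to produce segment-level emotion predictions, which is one typical way time-series analysis is layered on top of temporal segmentation.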
Pages: 748-753
Page count: 6
References
42 records in total
[1] Abelson, R. P.; Sermat, V. Multidimensional scaling of facial expressions. Journal of Experimental Psychology, 1962, 63(6): 546-554.
[2] [Anonymous]. Proceedings of the ACM International Conference on Multimedia. ACM.
[3] [Anonymous]. ACM Transactions on Interactive Intelligent Systems.
[4] [Anonymous]. Proceedings of the 3rd International Workshop on Affective Interaction in Natural Environments, 2010. DOI: 10.1145/1877826.1877831.
[5] [Anonymous]. Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data, 2007.
[6] [Anonymous]. Affective Computing.
[7] [Anonymous]. Depression, 2014.
[8] [Anonymous]. FG (IEEE International Conference on Automatic Face and Gesture Recognition).
[9] [Anonymous]. IEEE International Conference on Automatic Face and Gesture Recognition.
[10] [Anonymous]. Interspeech.