Multimodal semi-automated affect detection from conversational cues, gross body language, and facial features

Authors
Sidney K. D’Mello
Arthur Graesser
Affiliations
[1] Institute for Intelligent Systems, University of Memphis
Source
User Modeling and User-Adapted Interaction | 2010 / Vol. 20
Keywords
Multimodal affect detection; Conversational cues; Gross body language; Facial features; Superadditivity; AutoTutor; Affective computing; Human-computer interaction
DOI: Not available
Abstract
We developed and evaluated a multimodal affect detector that combines conversational cues, gross body language, and facial features. The multimodal affect detector uses feature-level fusion to combine the sensory channels and linear discriminant analyses to discriminate between naturally occurring experiences of boredom, engagement/flow, confusion, frustration, delight, and neutral. Training and validation data for the affect detector were collected in a study where 28 learners completed a 32-minute tutorial session with AutoTutor, an intelligent tutoring system with conversational dialogue. Classification results supported a channel × judgment type interaction, where the face was the most diagnostic channel for spontaneous affect judgments (i.e., at any time in the tutorial session), while conversational cues were superior for fixed judgments (i.e., every 20 s in the session). The analyses also indicated that the accuracy of the multichannel model (face, dialogue, and posture) was statistically higher than the best single-channel model for the fixed but not spontaneous affect expressions. However, multichannel models reduced the discrepancy (i.e., variance in the precision of the different emotions) of the discriminant models for both judgment types. The results also indicated that the combination of channels yielded superadditive effects for some affective states, but additive, redundant, and inhibitory effects for others. We explore the structure of the multimodal linear discriminant models and discuss the implications of some of our major findings.
Pages: 147–187
Page count: 40
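
The abstract describes feature-level fusion, where the per-channel feature vectors are concatenated into a single vector before classification, followed by linear discriminant analysis over the fused space. The sketch below illustrates that pipeline under stated assumptions: the synthetic features, their dimensions, and the use of scikit-learn's LinearDiscriminantAnalysis are illustrative choices, not the authors' actual implementation or feature set.

```python
# Minimal sketch of feature-level fusion + linear discriminant analysis,
# in the spirit of the approach summarized in the abstract. All feature
# values and dimensions here are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200  # hypothetical number of labeled affect observations

# Hypothetical per-channel feature vectors for each observation.
face = rng.normal(size=(n_samples, 12))      # e.g., facial-feature measurements
dialogue = rng.normal(size=(n_samples, 8))   # e.g., conversational-cue features
posture = rng.normal(size=(n_samples, 6))    # e.g., gross body-language features

# Affect labels: boredom, engagement/flow, confusion, frustration,
# delight, and neutral, encoded as integers 0-5.
labels = rng.integers(0, 6, size=n_samples)

# Feature-level fusion: concatenate the channels into one vector per
# observation before classification (as opposed to decision-level fusion,
# which would combine the outputs of separate per-channel classifiers).
fused = np.hstack([face, dialogue, posture])

# Linear discriminant analysis over the fused feature space, with
# cross-validation to estimate classification accuracy.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```

To mirror the paper's multichannel-versus-single-channel comparison, one could fit the same LDA classifier on `face`, `dialogue`, and `posture` individually and contrast the best single-channel accuracy with the fused model's accuracy; a fused score exceeding the sum of what the individual channels predict is the kind of superadditive effect the abstract reports for some affective states.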