Making discourse visible: Coding and animating conversational facial displays

Cited by: 13
Authors:
DeCarlo, D [1]
Revilla, C [1]
Stone, M [1]
Venditti, JJ [1]
Affiliations:
[1] Rutgers State Univ, Dept Comp Sci, Piscataway, NJ 08855 USA
Source:
CA 2002: PROCEEDINGS OF THE COMPUTER ANIMATION 2002 | 2002
DOI:
10.1109/CA.2002.1017501
CLC classification:
TP31 [Computer software];
Subject classification codes:
081202 ; 0835 ;
Abstract:
People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of nonverbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously to make an effective contribution to conversation. In this paper we describe a freely available, cross-platform, real-time facial animation system, RUTH, that animates such high-level signals in synchrony with speech and lip movements. RUTH adopts an open, layered architecture in which fine-grained features of the animation can be derived by rule from inferred linguistic structure, allowing us to use RUTH, in conjunction with annotation of observed discourse, to investigate the meaningful high-level elements of conversational facial movement for American English speakers.
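The abstract describes deriving fine-grained animation features by rule from inferred linguistic structure. A minimal sketch of that idea, assuming a hypothetical rule table that maps ToBI-style pitch accent labels on time-aligned words to high-level facial display commands (the names `Word`, `ACCENT_RULES`, and `facial_commands` are illustrative, not RUTH's actual API):

```python
# Hypothetical sketch of rule-based derivation of facial displays from
# annotated discourse, in the spirit of RUTH's layered architecture.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Word:
    text: str
    start: float            # start time in seconds
    end: float              # end time in seconds
    accent: Optional[str]   # ToBI-style pitch accent label, e.g. "H*"

# Assumed rule table (illustrative only): pitch accent -> facial display.
ACCENT_RULES = {
    "H*": "brow_raise",
    "L+H*": "brow_raise",
    "L*": "head_nod",
}

def facial_commands(words):
    """Derive timed facial display commands from annotated words."""
    commands = []
    for w in words:
        display = ACCENT_RULES.get(w.accent)
        if display:
            # Align the display with the accented word's time span.
            commands.append((display, w.start, w.end))
    return commands

utterance = [
    Word("RUTH", 0.0, 0.4, "H*"),
    Word("animates", 0.4, 0.9, None),
    Word("speech", 0.9, 1.3, "L+H*"),
]
print(facial_commands(utterance))
# -> [('brow_raise', 0.0, 0.4), ('brow_raise', 0.9, 1.3)]
```

Keeping the rules separate from the animation layer, as sketched here, is what lets annotation schemes be swapped or refined without touching the low-level rendering.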
Pages: 11-16 (6 pages)