Real-time Expression Cloning using Appearance Models

Cited by: 0
|
Authors
Theobald, Barry-John [1 ]
Matthews, Iain A. [2 ]
Cohn, Jeffrey F. [2 ]
Boker, Steven M. [3 ]
Affiliations
[1] Univ East Anglia, Sch Comp Sci, Norwich NR4 7TJ, Norfolk, England
[2] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA USA
[3] Univ Virginia, Dept Psychol, Charlottesville, VA USA
Source
ICMI'07: PROCEEDINGS OF THE NINTH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES | 2007
Funding
Engineering and Physical Sciences Research Council (UK);
Keywords
active appearance models; facial animation; expression cloning;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Active Appearance Models (AAMs) are generative parametric models commonly used to track, recognise and synthesise faces in images and video sequences. In this paper we describe a method for transferring dynamic facial gestures between subjects in real-time. The main advantages of our approach are that: 1) the mapping is computed automatically and does not require high-level semantic information describing facial expressions or visual speech gestures; 2) the mapping is simple and intuitive, allowing expressions to be transferred and rendered in real-time; 3) the mapped expression can be constrained to have the appearance of the target producing the expression, rather than the source expression imposed onto the target face; and 4) near-videorealistic talking faces for new subjects can be created without the cost of recording and processing a complete training corpus for each. Our system enables face-to-face interaction with an avatar driven by an AAM of an actual person in real-time, and we show examples of arbitrary expressive speech frames cloned across different subjects.
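The abstract's core idea, transferring an expression by operating on AAM parameters rather than on pixels, can be illustrated with a short sketch. The snippet below is a minimal illustration and not the authors' implementation: it assumes the expression is encoded as the displacement of the source subject's AAM parameters from that subject's neutral frame, and that this displacement is re-applied to the target subject's neutral parameters before rendering through the target's own model. All names (`clone_expression`, the neutral parameter vectors, the optional scaling) are hypothetical.

```python
import numpy as np

# Hypothetical sketch of parameter-space expression cloning between two AAMs.
# Assumption: an expression is represented as the offset of AAM parameters
# from a neutral frame, and that offset is re-applied in the target's space.

def clone_expression(p_source, p_source_neutral, p_target_neutral, scale=None):
    """Map a source AAM parameter vector onto the target subject.

    p_source         : AAM parameters fitted to the current source frame
    p_source_neutral : source parameters for a neutral (expressionless) frame
    p_target_neutral : target parameters for a neutral frame
    scale            : optional per-parameter scaling to compensate for
                       differing parameter ranges between models (assumption)
    """
    delta = p_source - p_source_neutral        # expression as a displacement
    if scale is not None:
        delta = delta * scale                  # rescale per parameter mode
    return p_target_neutral + delta            # re-apply in the target space


# Illustrative usage with random stand-in data.
rng = np.random.default_rng(0)
n_params = 20                                  # number of retained AAM modes

p_src_neutral = rng.normal(size=n_params)
p_tgt_neutral = rng.normal(size=n_params)
p_src_frame = p_src_neutral + rng.normal(scale=0.3, size=n_params)  # expressive frame

p_tgt_frame = clone_expression(p_src_frame, p_src_neutral, p_tgt_neutral)

# In a full system, p_tgt_frame would be passed to the target AAM's synthesis
# step (shape warp plus appearance reconstruction) to render the cloned frame.
print(p_tgt_frame.shape)
```

One plausible reading of the abstract's point 3 is reflected here: because the displacement is applied in the target's own parameter space and rendered through the target's model, the result retains the appearance of the target producing the expression rather than pasting the source's appearance onto the target face.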
Pages: 134 / +
Number of pages: 3
Related Papers
50 records in total
  • [31] REAL-TIME CONTROL OF 3D FACIAL ANIMATION
    Luo, Changwei
    Yu, Jun
    Jiang, Chen
    Li, Rui
    Wang, Zengfu
    2014 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2014
  • [32] VolTeMorph: Real-time, Controllable and Generalizable Animation of Volumetric Representations
    Garbin, Stephan J.
    Kowalski, Marek
    Estellers, Virginia
    Szymanowicz, Stanislaw
    Rezaeifar, Shideh
    Shen, Jingjing
    Johnson, Matthew A.
    Valentin, Julien
    COMPUTER GRAPHICS FORUM, 2024, 43 (06)
  • [33] Video-Audio Driven Real-Time Facial Animation
    Liu, Yilong
    Xu, Feng
    Chai, Jinxiang
    Tong, Xin
    Wang, Lijuan
    Huo, Qiang
    ACM TRANSACTIONS ON GRAPHICS, 2015, 34 (06)
  • [34] SYNTHESIZING REAL-TIME SPEECH-DRIVEN FACIAL ANIMATION
    Luo, Changwei
    Yu, Jun
    Wang, Zengfu
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014
  • [35] Functional cloning using pFB retroviral cDNA expression libraries
    Katherine A. Felts
    Keith Chen
    Kim Zaharee
    Latha Sundar
    Jamie Limjoco
    Anna Miller
    Peter Vaillancourt
    Molecular Biotechnology, 2002, 22 : 25 - 32
  • [36] Active Appearance-Motion Models for endocardial contour detection in time sequences of echocardiograms
    Bosch, HG
    Mitchell, SC
    Lelieveldt, BPF
    Nijland, F
    Kamp, O
    Sonka, M
    Reiber, JHC
    MEDICAL IMAGING: 2001: IMAGE PROCESSING, PTS 1-3, 2001, 4322 : 257 - 268
  • [37] Functional cloning using pFB retroviral cDNA expression libraries
    Felts, KA
    Chen, K
    Zaharee, K
    Sundar, L
    Limjoco, J
    Miller, A
    Vaillancourt, P
    MOLECULAR BIOTECHNOLOGY, 2002, 22 (01) : 25 - 32
  • [38] Action unit classification using active appearance models and conditional random fields
    van der Maaten, Laurens
    Hendriks, Emile
    COGNITIVE PROCESSING, 2012, 13 : 507 - 518
  • [39] Real-time Facial Animation with Image-based Dynamic Avatars
    Cao, Chen
    Wu, Hongzhi
    Weng, Yanlin
    Shao, Tianjia
    Zhou, Kun
    ACM TRANSACTIONS ON GRAPHICS, 2016, 35 (04)
  • [40] Emotion-Preserving Blendshape Update With Real-Time Face Tracking
    Wang, Zhibo
    Ling, Jingwang
    Feng, Chengzeng
    Lu, Ming
    Xu, Feng
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, 28 (06) : 2364 - 2375