TECA: Text-Guided Generation and Editing of Compositional 3D Avatars

Times Cited: 0
Authors
Zhang, Hao [1 ,3 ,4 ]
Feng, Yao [1 ,2 ]
Kulits, Peter [1 ]
Wen, Yandong [1 ]
Thies, Justus [1 ]
Black, Michael J. [1 ]
Affiliations
[1] Max Planck Inst Intelligent Syst, Stuttgart, Germany
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Tsinghua Univ, Beijing, Peoples R China
[4] Rhein Westfal TH Aachen, Aachen, Germany
Source
2024 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV 2024 | 2024
Keywords
DOI
10.1109/3DV62453.2024.00151
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description. While this challenge has attracted significant recent interest, existing methods either lack realism, produce unrealistic shapes, or do not support editing, such as modifications to the hairstyle. We argue that existing methods are limited because they employ a monolithic modeling approach, using a single representation for the head, face, hair, and accessories. Our observation is that the hair and face, for example, have very different structural qualities that benefit from different representations. Building on this insight, we generate avatars with a compositional model, in which the head, face, and upper body are represented with traditional 3D meshes, and the hair, clothing, and accessories with neural radiance fields (NeRF). The model-based mesh representation provides a strong geometric prior for the face region, improving realism while enabling editing of the person's appearance. By using NeRFs to represent the remaining components, our method is able to model and synthesize parts with complex geometry and appearance, such as curly hair and fluffy scarves. Our novel system synthesizes these high-quality compositional avatars from text descriptions. Specifically, we generate a face image using text, fit a parametric shape model to it, and inpaint texture using diffusion models. Conditioned on the generated face, we sequentially generate style components such as hair or clothing using Score Distillation Sampling (SDS) with guidance from CLIPSeg segmentations. However, this alone is not sufficient to produce avatars with a high degree of realism. Consequently, we introduce a hierarchical approach to refine the non-face regions using a BLIP-based loss combined with SDS. The experimental results demonstrate that our method, Text-guided generation and Editing of Compositional Avatars (TECA), produces avatars that are more realistic than those of recent methods while being editable because of their compositional nature. For example, our TECA enables the seamless transfer of compositional features like hairstyles, scarves, and other accessories between avatars. This capability supports applications such as virtual try-on. The code and generated avatars will be publicly available for research purposes at yfeng95.github.io/teca.
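For context, the Score Distillation Sampling (SDS) guidance mentioned in the abstract is commonly written as the following gradient. This is a sketch of the standard formulation from the text-to-3D literature (e.g., DreamFusion); the exact weighting and conditioning used by TECA are not specified in this record, and the symbols below are introduced here for illustration:

% Standard SDS gradient (DreamFusion-style); not necessarily TECA's exact loss.
\nabla_\theta \mathcal{L}_{\text{SDS}} = \mathbb{E}_{t,\epsilon}\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t; y, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right]

where x = g(\theta) is a rendering of the 3D representation with parameters \theta, x_t is its noised version at diffusion timestep t, \hat{\epsilon}_\phi is the diffusion model's noise prediction conditioned on the text prompt y, \epsilon is the sampled Gaussian noise, and w(t) is a timestep-dependent weight.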
Pages: 1520-1530
Number of Pages: 11