DiffusionRig: Learning Personalized Priors for Facial Appearance Editing

Cited by: 25
Authors
Ding, Zheng [1 ]
Zhang, Xuaner [2 ]
Xia, Zhihao [2 ]
Jebe, Lars [2 ]
Tu, Zhuowen [1 ]
Zhang, Xiuming [2 ]
Affiliations
[1] Univ Calif San Diego, La Jolla, CA 92093 USA
[2] Adobe, San Jose, CA USA
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.01225
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We address the problem of learning person-specific facial priors from a small number (e.g., 20) of portrait photos of the same person. This enables us to edit this specific person's facial appearance, such as expression and lighting, while preserving their identity and high-frequency facial details. Key to our approach, which we dub DiffusionRig, is a diffusion model conditioned on, or "rigged by," crude 3D face models estimated from single in-the-wild images by an off-the-shelf estimator. On a high level, DiffusionRig learns to map simplistic renderings of 3D face models to realistic photos of a given person. Specifically, DiffusionRig is trained in two stages: It first learns generic facial priors from a large-scale face dataset and then person-specific priors from a small portrait photo collection of the person of interest. By learning the CGI-to-photo mapping with such personalized priors, DiffusionRig can "rig" the lighting, facial expression, head pose, etc. of a portrait photo, conditioned only on coarse 3D models while preserving this person's identity and other high-frequency characteristics. Qualitative and quantitative experiments show that DiffusionRig outperforms existing approaches in both identity preservation and photorealism. Please see the project website: https://diffusionrig.github.io for the supplemental material, video, code, and data.
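The abstract describes conditioning a diffusion model on "physical buffers" rendered from a coarse 3D face model estimated by an off-the-shelf estimator. A minimal sketch of that conditioning scheme is below; the function names, buffer choices (normals, albedo, Lambertian shading), and channel counts are illustrative assumptions, not the authors' actual code:

```python
import numpy as np

H, W = 64, 64  # toy image resolution

def render_buffers(seed):
    """Stand-in for an off-the-shelf 3D face estimator + renderer
    (e.g., a DECA-style model). Here we fake the rendered maps with
    random arrays; a real system would rasterize the fitted mesh into
    surface normals (3ch), albedo (3ch), and Lambertian shading (1ch)."""
    rng = np.random.default_rng(seed)
    normals = rng.standard_normal((3, H, W))
    albedo = rng.standard_normal((3, H, W))
    shading = rng.standard_normal((1, H, W))
    return np.concatenate([normals, albedo, shading], axis=0)  # (7, H, W)

def denoiser_input(noisy_photo, buffers):
    """Channel-wise concatenation of the noisy photo with the rendered
    buffers: one plausible way to 'rig' a diffusion denoiser on a
    coarse 3D face model (an assumption about the conditioning)."""
    return np.concatenate([noisy_photo, buffers], axis=0)  # (3+7, H, W)

noisy = np.zeros((3, H, W))       # noisy photo at some diffusion step
x = denoiser_input(noisy, render_buffers(seed=0))
assert x.shape == (10, H, W)      # denoiser sees image + physical buffers
```

The two-stage training the abstract mentions would then fine-tune the same denoiser: first on a large generic face dataset, then on the ~20 portraits of the target person, so the buffers control pose, expression, and lighting while the fine-tuned weights carry identity.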
Pages: 12736-12746 (11 pages)