HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling

Cited by: 15
Authors
Zheng, Yujian [1 ,2 ]
Jin, Zirong [2 ]
Li, Moran [3 ]
Huang, Haibin [3 ]
Ma, Chongyang [3 ]
Cui, Shuguang [1 ,2 ]
Han, Xiaoguang [1 ,2 ]
Affiliations
[1] CUHKSZ, FNii, Shenzhen, Peoples R China
[2] CUHKSZ, SSE, Shenzhen, Peoples R China
[3] Kuaishou Technol, Beijing, Peoples R China
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Funding
National Key R&D Program of China;
Keywords
CAPTURE;
DOI
10.1109/CVPR52729.2023.01224
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we tackle the challenging problem of learning-based single-view 3D hair modeling. Because collecting paired real images and 3D hair data is extremely difficult, using synthetic data to provide prior knowledge for the real domain has become the leading solution, which unfortunately introduces the challenge of a domain gap. Due to the inherent difficulty of realistic hair rendering, existing methods typically use orientation maps instead of hair images as input to bridge the gap. We firmly believe that an intermediate representation is essential, but we argue that the orientation maps produced by the dominant filtering-based methods are sensitive to noise and far from a competent representation. We therefore raise this issue and propose a novel intermediate representation, termed HairStep, which consists of a strand map and a depth map. HairStep not only provides sufficient information for accurate 3D hair modeling, but can also be feasibly inferred from real images. Specifically, we collect a dataset of 1,250 portrait images with two types of annotations. A learning framework is further designed to transfer real images to the strand map and depth map. As an extra bonus, our new dataset enables the first quantitative metric for 3D hair modeling. Our experiments show that HairStep narrows the domain gap between synthetic and real data and achieves state-of-the-art performance on single-view 3D hair reconstruction.
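The abstract does not spell out how the strand map and depth map are combined into the HairStep representation. The sketch below illustrates one plausible assembly of such an input in Python; the channel layout, the direction-to-color encoding, and the function names (make_strand_map, make_hairstep) are assumptions made for illustration, not the authors' actual implementation.

```python
import numpy as np

def make_strand_map(hair_mask: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Assumed encoding: hair_mask (H, W) in {0, 1}; direction (H, W, 2) unit 2D growth vectors."""
    h, w = hair_mask.shape
    strand = np.zeros((h, w, 3), dtype=np.float32)
    strand[..., 0] = hair_mask                 # R channel: hair region mask
    strand[..., 1:] = (direction + 1.0) / 2.0  # G, B channels: growth direction remapped to [0, 1]
    strand[..., 1:] *= hair_mask[..., None]    # zero out direction outside the hair region
    return strand

def make_hairstep(hair_mask: np.ndarray, direction: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack the strand map (3 channels) and a normalized depth map (1 channel) into a 4-channel input."""
    depth = (depth - depth.min()) / max(depth.max() - depth.min(), 1e-6)
    depth = depth * hair_mask                  # keep depth only inside the hair region
    return np.concatenate(
        [make_strand_map(hair_mask, direction), depth[..., None].astype(np.float32)],
        axis=-1,
    )

# Usage with random placeholder inputs (no real annotations involved)
H, W = 256, 256
mask = (np.random.rand(H, W) > 0.5).astype(np.float32)
theta = np.random.rand(H, W) * np.pi
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
depth = np.random.rand(H, W)
hairstep = make_hairstep(mask, dirs, depth)
print(hairstep.shape)  # (256, 256, 4)
```

In this sketch the 4-channel tensor would be the input consumed by a downstream 3D hair reconstruction network; whether the paper concatenates the maps this way or feeds them separately is not stated in the abstract.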
Pages: 12726-12735
Number of pages: 10