UniHair: Towards Unified 3D Hair Reconstruction from
Single-View Portraits
SIGGRAPH Asia 2024
Yujian Zheng1, 2
Yuda Qiu2
Leyang Jin2
Chongyang Ma3
Haibin Huang3
Di Zhang3
Pengfei Wan3
Xiaoguang Han2, 1*
1FNii, CUHKSZ 2SSE, CUHKSZ 3Kuaishou Technology
*Corresponding author
[Paper]
[Video]
[Code]
[Dataset]
We propose a novel strategy that enables single-view 3D hair reconstruction with rich textures and complicated shapes, handling both braided and un-braided hairstyles through a unified pipeline. From left to right: input image, reconstructed 3D hair in a 3D Gaussian representation, and its rendered images from multiple views. Note that the underlying virtual avatar is for visualization only.
Abstract
Single-view 3D hair reconstruction is challenging due to the wide range of shape variations among diverse hairstyles.
Current state-of-the-art methods specialize in recovering un-braided 3D hair and often treat braided styles as failure cases, because of the inherent difficulty of defining priors, whether rule-based or data-based, for complex hairstyles. We propose a novel strategy that enables single-view 3D reconstruction of general hair types via a unified pipeline.
To achieve this, we first collect SynMvHair, a large-scale synthetic multi-view hair dataset with diverse 3D hair in both braided and un-braided styles, and learn two hair-specific diffusion priors from it.
We then optimize 3D Gaussian-based hair against these priors with two specially designed modules, i.e., view-wise and pixel-wise Gaussian refinement.
Our experiments demonstrate that reconstructing both braided and un-braided 3D hair from single-view images via a unified approach is possible, and that our method achieves state-of-the-art performance in recovering complex hairstyles.
Notably, our method generalizes well to real images, although it learns its hair priors from synthetic data.
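To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch of optimizing 3D Gaussian parameters under a learned prior across random views. It is not the authors' implementation: `toy_render` and `toy_prior_loss` are stand-in stubs for a differentiable Gaussian rasterizer and the hair diffusion priors, and all names and shapes are assumptions for illustration.

```python
import torch

# Hypothetical sketch of prior-guided 3D Gaussian optimization, loosely
# following the pipeline described in the abstract. The renderer and
# diffusion prior are stand-in stubs, NOT the authors' released code.

def toy_render(positions, colors, cam):
    """Stand-in 'renderer': rotate points into the camera frame and pool
    a flat feature; a real pipeline would rasterize 3D Gaussians."""
    projected = positions @ cam.T                       # (N, 3) -> (N, 3)
    return torch.cat([projected, colors], dim=-1).mean(dim=0)

def toy_prior_loss(rendering, condition):
    """Stand-in 'diffusion prior' score: pull the rendering toward the
    conditioning feature; a real prior would supply SDS-style gradients."""
    return torch.nn.functional.mse_loss(rendering, condition)

def optimize_gaussians(steps=200, n=1024):
    positions = torch.randn(n, 3, requires_grad=True)   # Gaussian centers
    colors = torch.rand(n, 3).requires_grad_(True)      # per-Gaussian colors
    condition = torch.zeros(6)                          # fake input-image feature
    opt = torch.optim.Adam([positions, colors], lr=1e-2)
    for _ in range(steps):
        cam, _ = torch.linalg.qr(torch.randn(3, 3))     # random view rotation
        loss = toy_prior_loss(toy_render(positions, colors, cam), condition)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return positions, colors
```

Sampling a fresh viewpoint at each step mirrors the view-wise supervision idea: the prior scores renderings from many directions so the recovered hair stays consistent beyond the single input view.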
Hairstyle Customization for 3D Avatars
Acknowledgements:
The website template is borrowed from gan_steerability.