Paper Title
Lifting 2D StyleGAN for 3D-Aware Face Generation
Paper Authors
Paper Abstract
We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation. Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, and lighting, and (2) generate 3D components for rendering synthetic images. Unlike most previous methods, our method is completely self-supervised, i.e., it requires neither manual annotations nor a 3DMM model for training. Instead, it learns to generate images as well as their 3D components by distilling the prior knowledge in StyleGAN2 with a differentiable renderer. The proposed model outputs both 3D shape and texture, allowing explicit pose and lighting control over generated images. Qualitative and quantitative results show the superiority of our approach over existing 3D-controllable GANs in content controllability, while generating realistic, high-quality images.
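The disentanglement described above can be pictured as a latent code being split into four components, with shading and texture recombined by a renderer. The following is a minimal toy sketch of that data flow, not the authors' actual architecture: the component heads are stand-in random linear maps (in the real model they are learned networks), the resolution is tiny, and the "renderer" is a simple Lambertian shading step on normals estimated from depth gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 512   # StyleGAN2 w-space dimensionality
H = W = 8          # tiny spatial resolution, for illustration only

def make_head(out_dim):
    # Hypothetical component head: a fixed random linear map standing in
    # for a learned network that predicts one 3D component from the latent.
    M = rng.standard_normal((out_dim, LATENT_DIM)) / np.sqrt(LATENT_DIM)
    return lambda w: M @ w

texture_head = make_head(3 * H * W)   # per-pixel albedo (RGB)
shape_head   = make_head(H * W)       # per-pixel depth
view_head    = make_head(6)           # viewpoint: rotation + translation
light_head   = make_head(4)           # ambient, diffuse, light direction (x, y)

def disentangle(w):
    """Split a latent code into texture, shape, viewpoint, lighting."""
    albedo = texture_head(w).reshape(3, H, W)
    depth  = shape_head(w).reshape(H, W)
    view   = view_head(w)
    light  = light_head(w)
    return albedo, depth, view, light

def shade(albedo, depth, light):
    """Toy Lambertian shading: surface normals from depth gradients."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)])      # (3, H, W)
    n /= np.linalg.norm(n, axis=0, keepdims=True)
    l = np.array([light[2], light[3], 1.0])                # light direction
    l /= np.linalg.norm(l)
    diffuse = np.clip((n * l[:, None, None]).sum(0), 0.0, 1.0)
    return albedo * (light[0] + light[1] * diffuse)        # shaded image

w = rng.standard_normal(LATENT_DIM)
albedo, depth, view, light = disentangle(w)
image = shade(albedo, depth, light)
```

Because every step here is differentiable, gradients from an image-level loss can flow back through the shading into the component heads, which is the property a differentiable renderer provides for distilling a pre-trained generator.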