Paper Title

High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization

Authors

Xie, Jiaxin, Ouyang, Hao, Piao, Jingtan, Lei, Chenyang, Chen, Qifeng

Abstract

We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image. High-fidelity 3D GAN inversion is inherently challenging due to the geometry-texture trade-off in 3D inversion, where overfitting to a single view input image often damages the estimated geometry during the latent optimization. To solve this challenge, we propose a novel pipeline that builds on the pseudo-multi-view estimation with visibility analysis. We keep the original textures for the visible parts and utilize generative priors for the occluded parts. Extensive experiments show that our approach achieves advantageous reconstruction and novel view synthesis quality over state-of-the-art methods, even for images with out-of-distribution textures. The proposed pipeline also enables image attribute editing with the inverted latent code and 3D-aware texture modification. Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
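
To make the core idea concrete, here is a minimal sketch of a pseudo-multi-view latent optimization loop as described in the abstract: for each sampled novel view, pixels visible from the input view keep the reprojected original texture, while occluded pixels fall back to the generator's own prediction (the generative prior). All names (`generator`, `estimate_visibility`, `warp_to_view`, `sample` helpers) are hypothetical placeholders, not the authors' actual API; this is an illustration under assumptions, not the paper's implementation.

```python
# Hedged sketch of pseudo-multi-view optimization for 3D GAN inversion.
# The callables passed in are assumed/hypothetical, not a real library API.
import torch
import torch.nn.functional as F


def pseudo_multi_view_inversion(
    generator,            # 3D-aware GAN: (latent, camera) -> rendered image tensor
    estimate_visibility,  # (latent, input_camera, novel_camera) -> per-pixel visibility mask
    warp_to_view,         # reprojects input-view texture into a novel view
    input_image,          # single input image, shape (1, 3, H, W)
    input_camera,         # camera pose of the input image
    novel_cameras,        # sampled novel camera poses
    num_steps=500,
    lr=1e-2,
):
    """Optimize a latent code against pseudo-multi-view targets."""
    latent = torch.randn(1, generator.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(num_steps):
        optimizer.zero_grad()
        # Reconstruction loss on the original input view preserves input details.
        loss = F.l1_loss(generator(latent, input_camera), input_image)

        for cam in novel_cameras:
            rendered = generator(latent, cam)                      # generative prior
            visible = estimate_visibility(latent, input_camera, cam)
            warped = warp_to_view(input_image, input_camera, cam)  # original texture
            # Pseudo target: original texture where visible, prior where occluded.
            pseudo_target = visible * warped + (1 - visible) * rendered.detach()
            loss = loss + F.l1_loss(rendered, pseudo_target)

        loss.backward()
        optimizer.step()

    return latent.detach()
```

Supervising the novel views with these composited pseudo targets, rather than the single input image alone, is what the abstract credits with avoiding the geometry-texture trade-off of overfitting to one view.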
