Paper Title

Texture Generation Using A Graph Generative Adversarial Network And Differentiable Rendering

Paper Authors

Dharma KC, Clayton T. Morrison, Bradley Walls

Paper Abstract

Novel photo-realistic texture synthesis is an important task for generating novel scenes, including asset generation for 3D simulations. However, to date, these methods predominantly generate textured objects in 2D space. If we rely on 2D object generation, then we need to make a computationally expensive forward pass each time we change the camera viewpoint or lighting. Recent work that can generate textures in 3D requires 3D component segmentation that is expensive to acquire. In this work, we present a novel conditional generative architecture that we call a graph generative adversarial network (GGAN) that can generate textures in 3D by learning object component information in an unsupervised way. In this framework, we do not need an expensive forward pass whenever the camera viewpoint or lighting changes, and we do not need expensive 3D part information for training, yet the model can generalize to unseen 3D meshes and generate appropriate novel 3D textures. We compare this approach against state-of-the-art texture generation methods and demonstrate that the GGAN obtains significantly better texture generation quality (according to Frechet inception distance). We release our model source code as open source.
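
The abstract describes the approach only at a high level. As a rough illustration of the ingredients it names (a graph-convolutional generator operating on mesh vertices, a 2D image discriminator, and a differentiable renderer connecting them), the sketch below is a minimal PyTorch mock-up. It is not the authors' released implementation: the layer sizes, the `normalized_adjacency`, `TextureGenerator`, and `ImageDiscriminator` names, and the choice to predict per-vertex colors are assumptions made for illustration, and the differentiable renderer (e.g. PyTorch3D or Kaolin in practice) is left out as a comment.

```python
# Minimal sketch (NOT the authors' code): a graph-convolutional generator that
# predicts per-vertex RGB colors for a mesh, plus a CNN discriminator over
# rendered 2D images. The differentiable renderer is omitted here.
import torch
import torch.nn as nn


def normalized_adjacency(edges: torch.Tensor, num_verts: int) -> torch.Tensor:
    """Symmetrically normalized adjacency (with self-loops) from mesh edges."""
    adj = torch.eye(num_verts)
    adj[edges[:, 0], edges[:, 1]] = 1.0
    adj[edges[:, 1], edges[:, 0]] = 1.0
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


class GraphConv(nn.Module):
    """One graph-convolution layer: aggregate neighbor features, then project."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return self.linear(adj @ x)


class TextureGenerator(nn.Module):
    """Maps vertex positions plus a global noise code to per-vertex RGB colors."""
    def __init__(self, noise_dim=64, hidden=128):
        super().__init__()
        self.gc1 = GraphConv(3 + noise_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.out = GraphConv(hidden, 3)

    def forward(self, verts, adj, z):
        # Broadcast the latent code to every vertex, then run message passing.
        h = torch.cat([verts, z.expand(verts.shape[0], -1)], dim=1)
        h = torch.relu(self.gc1(h, adj))
        h = torch.relu(self.gc2(h, adj))
        return torch.sigmoid(self.out(h, adj))  # per-vertex colors in [0, 1]


class ImageDiscriminator(nn.Module):
    """Standard CNN critic over rendered RGB images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, images):
        return self.net(images)


# Toy usage on a tetrahedron. A full pipeline would render the colored mesh
# with a differentiable renderer and feed the image to the discriminator so
# adversarial gradients reach the per-vertex colors.
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
edges = torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
adj = normalized_adjacency(edges, verts.shape[0])
colors = TextureGenerator()(verts, adj, torch.randn(1, 64))
print(colors.shape)  # torch.Size([4, 3]): one RGB color per vertex
```

The structural point is the one the abstract makes: because the generator attaches colors to the mesh itself, re-rendering under a new camera viewpoint or lighting only reruns the (cheap) renderer, not the generator.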
