Paper Title
Photo-to-Shape Material Transfer for Diverse Structures
Paper Authors
Paper Abstract
We introduce a method for assigning photorealistic relightable materials to 3D shapes in an automatic manner. Our method takes as input a photo exemplar of a real object and a segmented 3D object, and uses the exemplar to guide the assignment of materials to the parts of the shape, so that the appearance of the resulting shape is as similar as possible to the exemplar. To accomplish this goal, our method combines an image translation neural network with a material assignment neural network. The image translation network translates the color from the exemplar to a projection of the 3D shape, and the part segmentation from the projection to the exemplar. Then, the material prediction network assigns materials from a collection of realistic materials to the projected parts, based on the translated images and the perceptual similarity of the materials. One key idea of our method is to use the translation network to establish a correspondence between the exemplar and the shape projection, which allows us to transfer materials between objects with diverse structures. Another key idea is to use the two pairs of (color, segmentation) images provided by the image translation to guide the material assignment, which enables us to ensure consistency in the assignment. We demonstrate that our method assigns materials to shapes so that their appearances better resemble the input exemplars, improving the quality of the results over the state-of-the-art method and allowing us to automatically create thousands of shapes with high-quality photorealistic materials. Code and data for this paper are available at https://github.com/XiangyuSu611/TMT.
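The abstract describes a two-stage pipeline: an image translation network produces two (color, segmentation) image pairs, one for the shape projection and one for the exemplar, and a material prediction network then scores materials from a collection for each projected part. The PyTorch sketch below only illustrates that structure under assumed interfaces; the toy TranslationNet and MaterialPredictor modules, the per-part masking, and all layer sizes are hypothetical, and it is not the authors' implementation (see the GitHub repository above for the real code).

```python
# Minimal sketch of a photo-to-shape material transfer pipeline.
# All module names, channel sizes, and the per-part masking scheme are
# illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn


class TranslationNet(nn.Module):
    """Toy encoder-decoder standing in for the image translation network."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class MaterialPredictor(nn.Module):
    """Scores every material in a collection for one part, given the two
    (color, segmentation) image pairs masked to that part."""
    def __init__(self, num_materials, in_ch=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_materials)

    def forward(self, pair_exemplar, pair_projection):
        feats = self.backbone(torch.cat([pair_exemplar, pair_projection], dim=1))
        return self.head(feats)  # one score per material in the collection


def assign_materials(exemplar_rgb, projection_seg, num_parts, num_materials):
    """Run the two-stage pipeline; returns one material index per part."""
    # Stage 1: translate color exemplar -> projection, segmentation projection -> exemplar.
    color_to_proj = TranslationNet(in_ch=4, out_ch=3)
    seg_to_exemplar = TranslationNet(in_ch=4, out_ch=1)
    joint_input = torch.cat([exemplar_rgb, projection_seg], dim=1)
    translated_color = color_to_proj(joint_input)
    translated_seg = seg_to_exemplar(joint_input)

    # The two (color, segmentation) pairs that guide the assignment.
    pair_projection = torch.cat([translated_color, projection_seg], dim=1)  # 4 channels
    pair_exemplar = torch.cat([exemplar_rgb, translated_seg], dim=1)        # 4 channels

    # Stage 2: for each part, pick the highest-scoring material in the collection.
    predictor = MaterialPredictor(num_materials)
    assignments = []
    for part_id in range(num_parts):
        mask = (projection_seg == part_id).float()
        scores = predictor(pair_exemplar * mask, pair_projection * mask)
        assignments.append(scores.argmax(dim=1))
    return torch.stack(assignments, dim=1)  # (batch, num_parts) material indices


if __name__ == "__main__":
    B, H, W = 1, 64, 64
    exemplar = torch.rand(B, 3, H, W)                     # photo exemplar
    seg = torch.randint(0, 4, (B, 1, H, W)).float()       # part segmentation of the projection
    print(assign_materials(exemplar, seg, num_parts=4, num_materials=10).shape)
```

In this sketch the perceptual-similarity term mentioned in the abstract is folded into the predictor's learned scores rather than modeled explicitly; the real system trains both networks and retrieves materials from a curated photorealistic material collection.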