Title
Projective Urban Texturing
Authors
Abstract
This paper proposes a method for the automatic generation of textures for 3D city meshes in immersive urban environments. Many recent pipelines capture or synthesize large quantities of city geometry using scanners or procedural modeling. Such geometry is intricate and realistic; however, generating photo-realistic textures for scenes of this scale remains a problem. We propose to generate textures for input target 3D meshes driven by the textural style present in readily available datasets of panoramic photos capturing urban environments. Re-targeting such 2D datasets to 3D geometry is challenging because the underlying shape, size, and layout of the urban structures in the photos do not correspond to those in the target meshes. Photos also often contain objects (e.g., trees, vehicles) that may not even be present in the target geometry. To address these issues, we present a method, called Projective Urban Texturing (PUT), which re-targets textural style from real-world panoramic images to unseen urban meshes. PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation. The generated textures are stored in a texture atlas applied to the target 3D mesh geometry. To promote texture consistency, PUT employs an iterative procedure in which texture synthesis is conditioned on previously generated, adjacent textures. We present both quantitative and qualitative evaluations of the generated textures.
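To make the iterative texturing procedure concrete, below is a minimal sketch of synthesizing atlas tiles one at a time, conditioning each step on the previously generated neighboring tile. This is not the paper's implementation: the `TextureTranslator` module, the `texture_mesh_iteratively` function, the left-to-right tile order, and the single-row atlas layout are all illustrative assumptions; PUT's actual network is trained with contrastive and adversarial objectives for unpaired image-to-texture translation.

```python
import torch
import torch.nn as nn

class TextureTranslator(nn.Module):
    """Hypothetical stand-in for a trained image-to-texture translation
    network; assumes training has already happened."""
    def __init__(self, channels: int = 3):
        super().__init__()
        # Input: a rendered mesh view (3 ch) concatenated with the previously
        # generated neighbor texture (3 ch) along the channel axis.
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, rendered_view, neighbor_texture):
        x = torch.cat([rendered_view, neighbor_texture], dim=1)
        return self.net(x)

def texture_mesh_iteratively(translator, rendered_views, tile_hw=(256, 256)):
    """Fill a texture atlas tile by tile, conditioning each synthesis step
    on the previously generated (left) neighbor to promote consistency."""
    h, w = tile_hw
    n = len(rendered_views)
    atlas = torch.zeros(3, h, n * w)   # one row of tiles, for simplicity
    prev = torch.zeros(1, 3, h, w)     # blank conditioning for the first tile
    with torch.no_grad():
        for i, view in enumerate(rendered_views):
            tile = translator(view.unsqueeze(0), prev)
            atlas[:, :, i * w:(i + 1) * w] = tile[0]
            prev = tile                # condition the next tile on this one
    return atlas

# Example usage with dummy rendered views:
# views = [torch.rand(3, 256, 256) for _ in range(4)]
# atlas = texture_mesh_iteratively(TextureTranslator(), views)
```

Threading the previous tile back in as conditioning input is what lets each newly synthesized texture match the style and seams of its already-textured neighbors, which is the role the iterative procedure plays in the abstract above.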