Paper Title
3D Neural Field Generation using Triplane Diffusion
Paper Authors
Paper Abstract
Diffusion models have emerged as the state-of-the-art for image generation, among other tasks. Here, we present an efficient diffusion-based model for 3D-aware generation of neural fields. Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields and factoring them into a set of axis-aligned triplane feature representations. Thus, our 3D training scenes are all represented by 2D feature planes, and we can directly train existing 2D diffusion models on these representations to generate 3D neural fields with high quality and diversity, outperforming alternative approaches to 3D-aware generation. Our approach requires essential modifications to existing triplane factorization pipelines to make the resulting features easy to learn for the diffusion model. We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
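The triplane representation the abstract describes, where a 3D occupancy field is factored into three axis-aligned 2D feature planes and decoded by a small MLP, can be sketched as follows. This is a minimal NumPy illustration of the querying step only, under our own assumptions: the function names (`sample_plane`, `query_occupancy`), the feature summation across planes, and the toy logistic decoder standing in for the paper's occupancy MLP are ours, not taken from the authors' code.

```python
import numpy as np

def sample_plane(plane, u, v):
    """Bilinearly sample a (C, R, R) feature plane at coords u, v in [-1, 1]."""
    C, R, _ = plane.shape
    # Map [-1, 1] to [0, R-1] pixel coordinates.
    x = (u + 1.0) * 0.5 * (R - 1)
    y = (v + 1.0) * 0.5 * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    wx, wy = x - x0, y - y0
    # Weighted combination of the four neighboring feature vectors.
    return ((1 - wx) * (1 - wy) * plane[:, y0, x0]
            + wx * (1 - wy) * plane[:, y0, x1]
            + (1 - wx) * wy * plane[:, y1, x0]
            + wx * wy * plane[:, y1, x1])

def query_occupancy(planes, point, decoder):
    """Project a 3D point onto the three axis-aligned planes, sum the
    sampled features, and decode an occupancy value."""
    x, y, z = point
    feat = (sample_plane(planes["xy"], x, y)
            + sample_plane(planes["xz"], x, z)
            + sample_plane(planes["yz"], y, z))
    return decoder(feat)

# Toy setup: random feature planes and a logistic stand-in for the MLP.
rng = np.random.default_rng(0)
C, R = 8, 32
planes = {k: rng.standard_normal((C, R, R)).astype(np.float32)
          for k in ("xy", "xz", "yz")}
w = rng.standard_normal(C).astype(np.float32)
decoder = lambda f: 1.0 / (1.0 + np.exp(-f @ w))

occ = query_occupancy(planes, (0.1, -0.3, 0.5), decoder)
```

Because every 3D scene is reduced to this set of 2D planes, a standard 2D diffusion model can be trained on the stacked plane channels directly, which is the key efficiency point of the abstract.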