Paper Title
Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination
Paper Authors
Paper Abstract
Given a set of images of a scene, re-rendering this scene from novel views and lighting conditions is an important and challenging problem in Computer Vision and Graphics. On the one hand, most existing works in Computer Vision impose many assumptions regarding the image formation process, e.g. direct illumination and predefined materials, to make scene parameter estimation tractable. On the other hand, mature Computer Graphics tools allow modeling of complex photo-realistic light transport given all the scene parameters. Combining these approaches, we propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function, which implicitly handles global illumination effects using novel environment maps. Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition. To disambiguate the task during training, we tightly integrate a differentiable path tracer into the training process and propose a combination of a synthesized OLAT loss and a real image loss. Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art and, thus, our re-rendering results are also more realistic and accurate.
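For illustration, the classical precomputed radiance transfer (PRT) idea the abstract builds on can be summarized as a dot product between a per-point transfer vector and the lighting expressed in a shared basis, e.g. spherical harmonics (SH). The sketch below is a minimal, hypothetical PyTorch rendition in which a small MLP plays the role of the neural transfer function; all names and sizes (TransferMLP, N_SH, hidden, env_sh) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the precomputed radiance transfer (PRT) idea:
# outgoing radiance = <transfer vector, lighting basis coefficients>.
# Hypothetical illustration only; names, sizes, and architecture are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

N_SH = 9  # SH bands 0-2: 9 coefficients per color channel

class TransferMLP(nn.Module):
    """Maps a surface point and view direction to SH transfer coefficients."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_SH),  # one transfer vector per point/view
        )

    def forward(self, x: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, view_dir], dim=-1))

# Relighting with a novel environment map: project the map into the SH
# basis, then take the dot product with the predicted transfer vectors.
# Global illumination effects are baked into the learned transfer function.
transfer_fn = TransferMLP()
points = torch.rand(1024, 3)                      # surface points
views = torch.randn(1024, 3)
views = views / views.norm(dim=-1, keepdim=True)  # unit view directions
env_sh = torch.randn(N_SH, 3)                     # SH coeffs of env. map (RGB)

transfer = transfer_fn(points, views)             # (1024, N_SH)
radiance = transfer @ env_sh                      # (1024, 3) outgoing RGB
```

Because lighting enters only through the basis coefficients (env_sh here), the same learned transfer field can be relit with arbitrary novel environment maps at test time. In this sketch, the OLAT and real-image losses mentioned in the abstract would supervise transfer_fn, with the OLAT targets synthesized by the differentiable path tracer.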