Paper Title

Neural Scene Graphs for Dynamic Scenes

Paper Authors

Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, Felix Heide

Paper Abstract

Recent implicit neural rendering methods have demonstrated that it is possible to learn accurate view synthesis for complex scenes by predicting their volumetric density and color supervised solely by a set of RGB images. However, existing methods are restricted to learning efficient representations of static scenes that encode all scene objects into a single neural network, and lack the ability to represent dynamic scenes and decompositions into individual scene objects. In this work, we present the first neural rendering method that decomposes dynamic scenes into scene graphs. We propose a learned scene graph representation, which encodes object transformation and radiance, to efficiently render novel arrangements and views of the scene. To this end, we learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function. We assess the proposed method on synthetic and real automotive data, validating that our approach learns dynamic scenes -- only by observing a video of this scene -- and allows for rendering novel photo-realistic views of novel scene compositions with unseen sets of objects at unseen poses.
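To make the scene-graph idea in the abstract concrete, below is a minimal sketch, assuming a simplified interface: each dynamic object is a graph node holding a rigid pose and a jointly learned latent code, and a single shared implicit function maps a 3D point plus that code to density and color. All names here (SceneNode, implicit_fn, query_scene) and the toy density/color functions are illustrative assumptions, not the paper's actual API or network.

```python
# Sketch of a neural scene graph: nodes carry object transforms and latent
# codes; one shared implicit function describes all objects of a class.
from dataclasses import dataclass
import numpy as np

@dataclass
class SceneNode:
    """One dynamic object: a rigid transform plus a learned latent code."""
    rotation: np.ndarray      # (3, 3) world-to-object rotation
    translation: np.ndarray   # (3,) object position in world coordinates
    latent: np.ndarray        # (d,) jointly learned shape/appearance code
    half_extent: np.ndarray   # (3,) local bounding box, used to skip empty space

def implicit_fn(x_local: np.ndarray, latent: np.ndarray):
    """Stand-in for the shared learned function F(x, z) -> (density, rgb).
    A real model would be an MLP conditioned on the latent code (hypothetical
    toy functions here, just to keep the sketch runnable)."""
    density = float(np.exp(-np.sum(x_local ** 2)))   # toy radial density
    rgb = 0.5 * (np.tanh(latent[:3]) + 1.0)          # toy latent-driven color
    return density, rgb

def query_scene(nodes: list, x_world: np.ndarray):
    """Evaluate the scene graph at a world-space point: map the point into each
    node's local frame, skip nodes whose bounding box it misses, and query the
    shared implicit function with that node's latent code."""
    total_density, rgb_accum = 0.0, np.zeros(3)
    for node in nodes:
        x_local = node.rotation @ (x_world - node.translation)
        if np.any(np.abs(x_local) > node.half_extent):
            continue  # point lies outside this object's bounding volume
        density, rgb = implicit_fn(x_local, node.latent)
        total_density += density
        rgb_accum += density * rgb
    if total_density > 0:
        rgb_accum /= total_density  # density-weighted color blend
    return total_density, rgb_accum

# Editing the scene amounts to editing the graph: changing a node's transform
# or swapping its latent code yields a novel arrangement of the scene.
car = SceneNode(np.eye(3), np.array([2.0, 0.0, 0.0]),
                np.array([0.3, -0.1, 0.7, 0.2]), np.array([1.0, 0.5, 0.5]))
print(query_scene([car], np.array([2.1, 0.1, 0.0])))
```

This mirrors the abstract's claim at a structural level only: novel scene compositions with unseen object sets at unseen poses correspond to inserting or re-posing nodes, while view synthesis would additionally require volume rendering along camera rays.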
