Paper Title

DyNCA: Real-time Dynamic Texture Synthesis Using Neural Cellular Automata

Paper Authors

Ehsan Pajouheshgar, Yitao Xu, Tong Zhang, Sabine Süsstrunk

Paper Abstract

Current Dynamic Texture Synthesis (DyTS) models can synthesize realistic videos. However, they require a slow iterative optimization process to synthesize a single fixed-size short video, and they do not offer any post-training control over the synthesis process. We propose Dynamic Neural Cellular Automata (DyNCA), a framework for real-time and controllable dynamic texture synthesis. Our method is built upon the recently introduced NCA models and can synthesize infinitely long and arbitrary-sized realistic video textures in real time. We quantitatively and qualitatively evaluate our model and show that our synthesized videos appear more realistic than the existing results. We improve the SOTA DyTS performance by $2\sim 4$ orders of magnitude. Moreover, our model offers several real-time video controls including motion speed, motion direction, and an editing brush tool. We exhibit our trained models in an online interactive demo that runs on local hardware and is accessible on personal computers and smartphones.
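The abstract notes that DyNCA is built on Neural Cellular Automata. For readers unfamiliar with NCAs, the core mechanism is a grid of cells that each perceive their local neighborhood (typically via fixed convolution filters such as Sobel gradients), pass that perception through a small per-cell MLP, and apply the result as a stochastic residual update. The sketch below illustrates this generic NCA update step in NumPy; it is a minimal illustration of the underlying technique, not DyNCA's actual architecture, and the function name, weight shapes, and `fire_rate` parameter are assumptions for the example.

```python
import numpy as np

def nca_step(state, w1, b1, w2, rng=None, fire_rate=0.5):
    """One generic Neural-CA update: perceive -> tiny per-cell MLP -> stochastic
    residual update. `state` is an (H, W, C) cell grid; the weights are
    illustrative placeholders, not DyNCA's trained parameters."""
    # Perception: each cell sees its own state plus Sobel-filtered neighborhoods.
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32) / 8.0
    sobel_y = sobel_x.T

    def filt(img, k):
        # Apply a 3x3 kernel with wrap-around (toroidal) boundaries via shifts.
        out = np.zeros_like(img)
        for dy in range(-1, 2):
            for dx in range(-1, 2):
                out += k[dy + 1, dx + 1] * np.roll(np.roll(img, dy, 0), dx, 1)
        return out

    gx = np.stack([filt(state[..., c], sobel_x) for c in range(state.shape[-1])], -1)
    gy = np.stack([filt(state[..., c], sobel_y) for c in range(state.shape[-1])], -1)
    percept = np.concatenate([state, gx, gy], axis=-1)  # (H, W, 3C)

    # Tiny per-cell MLP shared across all cells produces a residual update.
    hidden = np.maximum(percept @ w1 + b1, 0.0)  # ReLU, (H, W, hidden_dim)
    delta = hidden @ w2                          # (H, W, C)

    # A random update mask makes the automaton asynchronous, which helps
    # stability; only a fraction of cells fire each step.
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(state.shape[:2])[..., None] < fire_rate
    return state + delta * mask
```

Because the update rule is local and shared across all cells, the same trained weights run on a grid of any size, which is what lets this family of models synthesize arbitrary-sized textures.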
