Paper Title

Motion Guided Deep Dynamic 3D Garments

Paper Authors

Meng Zhang, Duygu Ceylan, Niloy J. Mitra

Paper Abstract

Realistic dynamic garments on animated characters have many AR/VR applications. While authoring such dynamic garment geometry is still a challenging task, data-driven simulation provides an attractive alternative, especially if it can be controlled simply using the motion of the underlying character. In this work, we focus on motion guided dynamic 3D garments, especially for loose garments. In a data-driven setup, we first learn a generative space of plausible garment geometries. Then, we learn a mapping to this space to capture the motion dependent dynamic deformations, conditioned on the previous state of the garment as well as its relative position with respect to the underlying body. Technically, we model garment dynamics, driven using the input character motion, by predicting per-frame local displacements in a canonical state of the garment that is enriched with frame-dependent skinning weights to bring the garment to the global space. We resolve any remaining per-frame collisions by predicting residual local displacements. The resultant garment geometry is used as history to enable iterative rollout prediction. We demonstrate plausible generalization to unseen body shapes and motion inputs, and show improvements over multiple state-of-the-art alternatives.
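
To make the described pipeline concrete, below is a minimal sketch of the per-frame deformation and iterative rollout, assuming linear blend skinning (LBS) as the skinning model. This is not the authors' code: the predictor `model`, its interface, and all tensor shapes are hypothetical stand-ins for the learned modules described in the abstract.

```python
# Minimal sketch of the abstract's pipeline: predict local displacements in
# the garment's canonical state, skin them to global space with predicted
# frame-dependent weights, add a collision-resolving residual, and feed the
# result back as history. All interfaces here are assumptions.
import numpy as np

def skin_garment(v_canonical, local_disp, skin_weights, bone_transforms):
    """Bring the displaced canonical garment into global space via LBS.

    v_canonical:     (V, 3) canonical garment vertices
    local_disp:      (V, 3) predicted per-frame local displacements
    skin_weights:    (V, B) predicted frame-dependent skinning weights
    bone_transforms: (B, 4, 4) body bone transforms for the current frame
    """
    v = v_canonical + local_disp  # displace in the canonical state
    v_h = np.concatenate([v, np.ones((v.shape[0], 1))], axis=1)  # homogeneous
    # Blend per-bone transforms with the predicted weights, then apply.
    T = np.einsum('vb,bij->vij', skin_weights, bone_transforms)
    return np.einsum('vij,vj->vi', T, v_h)[:, :3]

def rollout(model, v_canonical, motion, garment_history):
    """Iterative rollout: each predicted frame becomes history for the next."""
    frames = []
    for body_frame in motion:  # per-frame body pose / bone transforms
        # Hypothetical network interface: predicts local displacements,
        # skinning weights, and a collision-resolving residual, conditioned
        # on the input motion and the garment's previous state.
        disp, weights, residual = model(body_frame, garment_history)
        v = skin_garment(v_canonical, disp, weights, body_frame['bones'])
        v = v + residual        # resolve remaining per-frame collisions
        garment_history = v     # feed the result back as history
        frames.append(v)
    return frames
```

Under this reading, the split between canonical-space displacements and skinning lets the large rigid body motion be absorbed by the skinning step, so the learned modules only need to model the smaller motion-dependent dynamic deformations.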
