Paper Title
Unsupervised Shape and Pose Disentanglement for 3D Meshes
Paper Authors
Paper Abstract
Parametric models of humans, faces, hands, and animals have been widely used for a range of tasks such as image-based reconstruction, shape correspondence estimation, and animation. Their key strength is the ability to factor surface variations into shape- and pose-dependent components. Learning such models requires substantial expert knowledge and hand-defined object-specific constraints, making the approach hard to scale to novel objects. In this paper, we present a simple yet effective approach to learn disentangled shape and pose representations in an unsupervised setting. We use a combination of self-consistency and cross-consistency constraints to learn pose and shape spaces from registered meshes. We additionally incorporate as-rigid-as-possible (ARAP) deformation into the training loop to avoid degenerate solutions. We demonstrate the usefulness of the learned representations through a number of tasks, including pose transfer and shape retrieval. Experiments on datasets of 3D humans, faces, hands, and animals demonstrate the generality of our approach. Code is available at https://virtualhumans.mpi-inf.mpg.de/unsup_shape_pose/.
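To make the cross-consistency constraint concrete, below is a minimal sketch, not the authors' implementation: it assumes meshes registered to a shared topology with V vertices and uses plain MLP encoders/decoders as stand-ins for whatever mesh architecture the paper actually employs. The key idea it illustrates is that two poses of the same subject must still reconstruct correctly after their shape codes are swapped.

```python
# Hedged sketch of a shape/pose auto-encoder with a cross-consistency loss.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn


class DisentangledAutoencoder(nn.Module):
    def __init__(self, num_verts, shape_dim=16, pose_dim=16, hidden=256):
        super().__init__()
        in_dim = num_verts * 3
        self.num_verts = num_verts
        # Two independent encoders split each mesh into a shape code
        # (identity) and a pose code (articulation).
        self.shape_enc = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, shape_dim))
        self.pose_enc = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, pose_dim))
        self.dec = nn.Sequential(
            nn.Linear(shape_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim))

    def encode(self, verts):
        x = verts.flatten(1)                    # (B, V, 3) -> (B, V*3)
        return self.shape_enc(x), self.pose_enc(x)

    def decode(self, shape_code, pose_code):
        out = self.dec(torch.cat([shape_code, pose_code], dim=-1))
        return out.view(-1, self.num_verts, 3)


def cross_consistency_loss(model, verts_a, verts_b):
    """verts_a, verts_b: (B, V, 3) meshes of the SAME subject in two poses.

    Because the subject is shared, swapping the shape codes between the two
    poses must still reconstruct each input; this is what pushes identity
    information out of the pose code.
    """
    shape_a, pose_a = model.encode(verts_a)
    shape_b, pose_b = model.encode(verts_b)
    rec_a = model.decode(shape_b, pose_a)       # b's shape + a's pose -> a
    rec_b = model.decode(shape_a, pose_b)       # a's shape + b's pose -> b
    return ((rec_a - verts_a) ** 2).mean() + ((rec_b - verts_b) ** 2).mean()
```

Pose transfer between different subjects has no such ground-truth pair, which is where the ARAP regularization described next comes in.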
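For reference, the ARAP term mentioned in the abstract is, in its standard form (Sorkine and Alexa, 2007), an energy penalizing non-rigid local deformation; the paper's exact use of it inside the training loop may differ, but the energy itself is:

```latex
E_{\mathrm{ARAP}}(V, V') = \sum_{i=1}^{n} \sum_{j \in \mathcal{N}(i)} w_{ij}
  \bigl\| (v'_i - v'_j) - R_i (v_i - v_j) \bigr\|^2
```

where $v_i$ and $v'_i$ are vertex positions before and after deformation, $\mathcal{N}(i)$ is the one-ring neighborhood of vertex $i$, $w_{ij}$ are (typically cotangent) edge weights, and $R_i$ is the rotation best aligning the deformed one-ring with the original. Keeping this energy small constrains decoded meshes to near-rigid local motion, which rules out collapsed or otherwise degenerate reconstructions.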