Paper Title

Learning Pose-invariant 3D Object Reconstruction from Single-view Images

Authors

Bo Peng, Wei Wang, Jing Dong, Tieniu Tan

Abstract

Learning to reconstruct 3D shapes from 2D images is an active research topic, with the benefit of not requiring expensive 3D data. However, most work in this direction requires multi-view images of each object instance as training supervision, which often does not hold in practice. In this paper, we relax the common multi-view assumption and explore a more challenging yet more realistic setup: learning 3D shape from only single-view images. The major difficulty lies in the insufficient constraints that single-view images can provide, which leads to pose entanglement in the learned shape space. As a result, reconstructed shapes vary with the input pose and have poor accuracy. We address this problem from a novel domain adaptation perspective and propose an effective adversarial domain confusion method to learn a pose-disentangled, compact shape space. Experiments on single-view reconstruction show the method's effectiveness in resolving pose entanglement, and it achieves reconstruction accuracy on par with the state of the art at higher efficiency.
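The abstract names adversarial domain confusion as the mechanism for removing pose information from the learned shape space. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the paper's actual architecture: a shape encoder produces a compact code, a pose discriminator tries to predict a discretized viewpoint from that code, and the encoder is trained to push the discriminator's prediction toward a uniform distribution. All module sizes, the pose binning, and the loss weight are illustrative assumptions.

```python
# Hypothetical sketch of adversarial pose/domain confusion (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeEncoder(nn.Module):
    """Maps an image to a compact shape code, from which pose should be removed."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_dim),
        )
    def forward(self, x):
        return self.net(x)

class PoseDiscriminator(nn.Module):
    """Tries to predict a discretized viewpoint bin from the shape code."""
    def __init__(self, code_dim=128, num_pose_bins=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, num_pose_bins),
        )
    def forward(self, z):
        return self.net(z)

encoder = ShapeEncoder()
discriminator = PoseDiscriminator()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(images, pose_bins, lambda_confusion=0.1):
    # 1) Discriminator step: learn to recover the pose bin from the shape code.
    z = encoder(images).detach()
    disc_loss = F.cross_entropy(discriminator(z), pose_bins)
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # 2) Encoder step: "confuse" the discriminator by matching its output to a
    #    uniform distribution over pose bins, so the code carries little pose info.
    z = encoder(images)
    logits = discriminator(z)
    uniform = torch.full_like(logits, 1.0 / logits.size(1))
    confusion_loss = F.kl_div(F.log_softmax(logits, dim=1), uniform,
                              reduction="batchmean")
    # In a full model this term would be added to the 2D reconstruction loss.
    enc_loss = lambda_confusion * confusion_loss
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
    return disc_loss.item(), confusion_loss.item()
```

In this sketch the confusion objective plays the role the abstract attributes to adversarial domain confusion: the encoder is penalized whenever the pose can be read off its shape code, encouraging reconstructions that stay consistent across input viewpoints.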
