Paper Title
HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose Estimation from a Single Depth Map
Paper Authors
Paper Abstract
3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many applications. The state-of-the-art methods directly regress 3D hand meshes from 2D depth images via 2D convolutional neural networks, which leads to artefacts in the estimations due to perspective distortions in the images. In contrast, we propose a novel architecture with 3D convolutions trained in a weakly-supervised manner. The input to our method is a 3D voxelized depth map, and we rely on two hand shape representations. The first one is the 3D voxelized grid of the shape, which is accurate but does not preserve the mesh topology and the number of mesh vertices. The second representation is the 3D hand surface, which is less accurate but does not suffer from the limitations of the first representation. We combine the advantages of these two representations by registering the hand surface to the voxelized hand shape. In extensive experiments, the proposed approach improves over the state of the art by 47.8% on the SynHand5M dataset. Moreover, our augmentation policy for voxelized depth maps further enhances the accuracy of 3D hand pose estimation on real data. Our method produces visually more reasonable and realistic hand shapes on the NYU and BigHand2.2M datasets compared to existing approaches.
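The abstract's input representation is a voxelized depth map. The sketch below illustrates one common way such a grid can be built: back-projecting valid depth pixels through a pinhole camera model and binning the resulting point cloud into a binary occupancy cube. It is a minimal illustration, not the paper's implementation; the grid size, crop/normalization scheme, and camera intrinsics (fx, fy, cx, cy) are all illustrative assumptions.

```python
# Minimal sketch of depth-map voxelization (illustrative, not HandVoxNet's code).
import numpy as np

def voxelize_depth_map(depth, fx, fy, cx, cy, grid_size=88):
    """Back-project valid depth pixels to 3D and bin them into a binary cube."""
    v, u = np.nonzero(depth > 0)          # pixel coordinates with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)  # (N, 3) point cloud

    # Normalize the cloud into the unit cube around its centroid
    # (a stand-in for the hand-centered cropping a real pipeline would use).
    center = points.mean(axis=0)
    extent = np.abs(points - center).max() + 1e-6
    normed = (points - center) / (2.0 * extent) + 0.5   # values in [0, 1]

    # Scatter the points into a binary occupancy grid.
    idx = np.clip((normed * grid_size).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Usage with a synthetic depth map (intrinsics here are assumed values):
depth = np.zeros((480, 640), dtype=np.float32)
depth[200:280, 300:380] = 750.0           # fake hand region at ~750 mm
voxels = voxelize_depth_map(depth, fx=588.0, fy=587.0, cx=320.0, cy=240.0)
print(voxels.shape, int(voxels.sum()))    # (88, 88, 88), occupied voxel count
```

A grid built this way, unlike the 2D depth image, places each point at its true 3D location, which is why the abstract argues 3D convolutions over voxels avoid the perspective-distortion artefacts of 2D regression.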