Paper Title

MM-Hand: 3D-Aware Multi-Modal Guided Hand Generative Network for 3D Hand Pose Synthesis

Paper Authors

Zhenyu Wu, Duc Hoang, Shih-Yao Lin, Yusheng Xie, Liangjian Chen, Yen-Yu Lin, Zhangyang Wang, Wei Fan

Paper Abstract

Estimating the 3D hand pose from a monocular RGB image is important but challenging. A solution is training on large-scale RGB hand images with accurate 3D hand keypoint annotations. However, it is too expensive in practice. Instead, we have developed a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images under the guidance of 3D pose information. We propose a 3D-aware multi-modal guided hand generative network (MM-Hand), together with a novel geometry-based curriculum learning strategy. Our extensive experimental results demonstrate that the 3D-annotated images generated by MM-Hand qualitatively and quantitatively outperform existing options. Moreover, the augmented data can consistently improve the quantitative performance of the state-of-the-art 3D hand pose estimators on two benchmark datasets. The code will be available at https://github.com/ScottHoang/mm-hand.
