Paper Title
DIREG3D: DIrectly REGress 3D Hands from Multiple Cameras
Paper Authors
Paper Abstract
In this paper, we present DIREG3D, a holistic framework for 3D Hand Tracking. The proposed framework is capable of utilizing camera intrinsic parameters, 3D geometry, intermediate 2D cues, and visual information to regress parameters for accurately representing a Hand Mesh model. Our experiments show that cues such as the size of the 2D hand, its distance from the optical center, and radial distortion are useful for deriving highly reliable camera-space 3D poses from monocular information alone. Furthermore, we extend these results to a multi-view camera setup by fusing features from different viewpoints.
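The abstract notes that the 2D hand's size and its distance from the optical center help recover camera-space depth. The intuition follows from the standard pinhole-camera model: a hand of known physical size that appears smaller in pixels must be farther away. The sketch below illustrates this geometric cue under assumed values (the focal length, hand size, and function names are illustrative, not from the paper, which regresses these quantities with a learned model rather than a closed-form rule):

```python
def depth_from_2d_size(focal_px: float, real_size_m: float, size_px: float) -> float:
    """Pinhole-model depth cue: size_px = focal_px * real_size_m / Z, so
    Z = focal_px * real_size_m / size_px (all distances in meters)."""
    return focal_px * real_size_m / size_px

def offset_from_optical_center(u: float, v: float, cx: float, cy: float) -> tuple:
    """Pixel offset of a 2D detection from the optical center (cx, cy),
    another monocular cue mentioned in the abstract."""
    return (u - cx, v - cy)

# Illustrative numbers: 1000 px focal length, an 18 cm hand spanning 300 px
z = depth_from_2d_size(1000.0, 0.18, 300.0)  # -> 0.6 m
du, dv = offset_from_optical_center(820.0, 410.0, 640.0, 360.0)  # -> (180.0, 50.0)
```

In the actual framework these quantities serve as input features to a regression network, which can also absorb effects such as radial distortion that the naive pinhole relation ignores.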