Paper Title
Leveraging Temporal Joint Depths for Improving 3D Human Pose Estimation in Video
Paper Authors
Paper Abstract
Approaches that predict 3D poses from 2D poses estimated in each frame of a video have proven effective for 3D human pose estimation. However, 2D poses without appearance information of the person are highly ambiguous with respect to joint depths. In this paper, we propose to estimate a 3D pose in each frame of a video and refine it by considering temporal information. The proposed approach reduces the ambiguity of joint depths and improves 3D pose estimation accuracy.
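The abstract does not detail how the temporal refinement is carried out. As a rough illustration of the general idea only, and not the paper's actual method, the sketch below smooths the depth (z) coordinate of per-frame 3D joint estimates with a moving average over neighboring frames; the function name refine_joint_depths, the window parameter, and the (frames, joints, 3) pose layout are assumptions introduced for illustration.

import numpy as np

def refine_joint_depths(poses_3d, window=5):
    """Hypothetical temporal refinement: smooth per-frame joint depths
    (z coordinates) with a moving average over neighboring frames.

    poses_3d: array of shape (T, J, 3), i.e. T frames, J joints, xyz per
              joint, where z is the joint depth.
    window:   number of neighboring frames on each side to average over.
    """
    refined = poses_3d.copy()
    num_frames = poses_3d.shape[0]
    for t in range(num_frames):
        lo = max(0, t - window)
        hi = min(num_frames, t + window + 1)
        # Replace each frame's joint depths with the temporal mean,
        # keeping the x/y coordinates from the per-frame estimate intact.
        refined[t, :, 2] = poses_3d[lo:hi, :, 2].mean(axis=0)
    return refined

# Usage example with placeholder per-frame estimates (random data).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    per_frame_poses = rng.normal(size=(100, 17, 3))  # 100 frames, 17 joints
    smoothed = refine_joint_depths(per_frame_poses, window=5)
    print(smoothed.shape)  # (100, 17, 3)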