Title
DepthTransfer: Depth Extraction from Video Using Non-parametric Sampling
Authors
Abstract
We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large dataset containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.
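The core idea of non-parametric depth sampling can be illustrated with a toy sketch: given a database of images with known depths, retrieve the candidates most similar to a query image and fuse their depth maps into a depth prior. Everything below (the random database, the feature dimensionality, the per-pixel median fusion) is a hypothetical simplification for illustration; the paper's actual pipeline additionally warps candidates with SIFT flow and refines the fused depth with a global optimization that incorporates motion cues and temporal consistency.

```python
import numpy as np

# Hypothetical toy database: each entry pairs a global feature vector
# (standing in for a GIST-like descriptor) with a dense depth map.
rng = np.random.default_rng(0)
DB_SIZE, FEAT_DIM, H, W = 50, 64, 8, 8
db_features = rng.normal(size=(DB_SIZE, FEAT_DIM))
db_depths = rng.uniform(1.0, 10.0, size=(DB_SIZE, H, W))

def infer_depth(query_feature, k=7):
    """Simplified non-parametric depth sampling:
    1. retrieve the k database entries most similar to the query;
    2. fuse their depth maps (here, a per-pixel median) into a prior.
    The real method warps each candidate to the query with SIFT flow
    and solves an optimization; this sketch omits both steps.
    """
    dists = np.linalg.norm(db_features - query_feature, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of k nearest neighbors
    return np.median(db_depths[nearest], axis=0)

query = rng.normal(size=FEAT_DIM)
depth = infer_depth(query)
print(depth.shape)  # (8, 8)
```

Because the fused depth is a per-pixel median of retrieved candidates, it stays within the depth range present in the database, which is why a large and varied training set (such as the Kinect-captured dataset described above) matters for this style of inference.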