Paper Title
RoutedFusion: Learning Real-time Depth Map Fusion
Paper Authors
Paper Abstract
The efficient fusion of depth maps is a key part of most state-of-the-art 3D reconstruction methods. Besides requiring high accuracy, these depth fusion methods need to be scalable and real-time capable. To this end, we present a novel real-time capable machine-learning-based method for depth map fusion. Similar to the seminal depth map fusion approach by Curless and Levoy, we only update a local group of voxels to ensure real-time capability. Instead of a simple linear fusion of depth information, we propose a neural network that predicts non-linear updates to better account for typical fusion errors. Our network is composed of a 2D depth routing network and a 3D depth fusion network, which efficiently handle sensor-specific noise and outliers. This is especially useful for surface edges and thin objects, for which the original approach suffers from thickening artifacts. Our method outperforms the traditional fusion approach and related learned approaches on both synthetic and real data. We demonstrate the performance of our method by reconstructing fine geometric details from noise- and outlier-contaminated data on various scenes.
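To make the contrast with classical fusion concrete, the sketch below implements the standard Curless-and-Levoy per-voxel update (a running weighted average of truncated signed distance values) and indicates where a learned, non-linear update would take its place. This is a minimal illustration only: the `fusion_net` callable and its input layout are assumptions made for this sketch, not the paper's actual architecture or API.

```python
import numpy as np

def linear_tsdf_update(tsdf, weight, new_sdf, new_weight=1.0):
    """Classical Curless & Levoy fusion: each voxel keeps a running
    weighted average of truncated signed distance observations."""
    fused = (weight * tsdf + new_weight * new_sdf) / (weight + new_weight)
    return fused, weight + new_weight

def learned_tsdf_update(local_tsdf, local_weight, routed_depth, fusion_net):
    """Sketch of the learned alternative: a network sees the current
    local voxel group plus the routed depth measurement and predicts
    a non-linear update instead of applying a fixed average.

    `fusion_net` and this input layout are illustrative assumptions,
    standing in for the paper's 3D depth fusion network."""
    features = np.stack([local_tsdf, local_weight, routed_depth], axis=-1)
    return fusion_net(features)

# Example: fuse one new observation into a single voxel the classical way.
tsdf, weight = linear_tsdf_update(tsdf=0.02, weight=5.0, new_sdf=-0.01)
```

Because only a local group of voxels is touched per update in either variant, the cost per frame stays bounded, which is what makes the approach real-time capable.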