Paper Title

Relative Pose from Deep Learned Depth and a Single Affine Correspondence

Authors

Eichhardt, Ivan, Barath, Daniel

Abstract

We propose a new approach for combining deep-learned non-metric monocular depth with affine correspondences (ACs) to estimate the relative pose of two calibrated cameras from a single correspondence. Considering the depth information and affine features, two new constraints on the camera pose are derived. The proposed solver is usable within 1-point RANSAC approaches. Thus, the processing time of the robust estimation is linear in the number of correspondences and, therefore, orders of magnitude faster than by using traditional approaches. The proposed 1AC+D solver is tested both on synthetic data and on 110395 publicly available real image pairs where we used an off-the-shelf monocular depth network to provide up-to-scale depth per pixel. The proposed 1AC+D leads to similar accuracy as traditional approaches while being significantly faster. When solving large-scale problems, e.g., pose-graph initialization for Structure-from-Motion (SfM) pipelines, the overhead of obtaining ACs and monocular depth is negligible compared to the speed-up gained in the pairwise geometric verification, i.e., relative pose estimation. This is demonstrated on scenes from the 1DSfM dataset using a state-of-the-art global SfM algorithm. Source code: https://github.com/eivan/one-ac-pose
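Because the 1AC+D solver needs only a single affine correspondence (plus depth) per hypothesis, it fits a 1-point RANSAC loop: each iteration draws one correspondence, hypothesizes a pose, and scores inliers in O(n), so a fixed iteration budget makes robust estimation linear in the number of correspondences. A minimal sketch of such a loop is below; the `solver` and `residual` callables stand in for the paper's actual 1AC+D pose solver and reprojection test, and the 1-D translation example is purely illustrative, not the method itself.

```python
import random

def ransac_1pt(samples, solver, residual, threshold, max_iters=50, seed=0):
    """Generic 1-point RANSAC: every hypothesis comes from a SINGLE sample,
    so one iteration costs O(n) and a fixed iteration budget keeps the whole
    robust estimation linear in the number of correspondences."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(max_iters):
        s = rng.choice(samples)            # minimal sample: one correspondence
        model = solver(s)                  # hypothesize a model from it
        inliers = [x for x in samples if residual(model, x) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy stand-in for 1AC+D: recover a 1-D translation t from pairs (x, x')
# with x' = x + t, contaminated by gross outliers.
data = [(x, x + 3.0) for x in range(20)] + [(x, 100.0 + x) for x in range(5)]
t, inliers = ransac_1pt(
    data,
    solver=lambda s: s[1] - s[0],
    residual=lambda t, s: abs((s[1] - s[0]) - t),
    threshold=0.1,
)
# t recovers the inlier translation (3.0) despite 20% outliers.
```

In the paper's setting the minimal sample is one AC with its learned depth, and the residual is a geometric error of the hypothesized relative pose; the linear-time property claimed in the abstract comes from the same structure as this loop.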
