Paper Title
Feature-metric Loss for Self-supervised Learning of Depth and Egomotion
Paper Authors
Paper Abstract
Photometric loss is widely used for self-supervised depth and egomotion estimation. However, the loss landscapes induced by photometric differences are often problematic for optimization: plateaus arise at pixels in textureless regions, and multiple local minima arise at less discriminative pixels. In this work, a feature-metric loss is proposed and defined on feature representations, where the feature representation is also learned in a self-supervised manner and regularized by both first-order and second-order derivatives to constrain the loss landscape to form a proper convergence basin. Comprehensive experiments and detailed analysis via visualization demonstrate the effectiveness of the proposed feature-metric loss. In particular, our method improves the state of the art on KITTI from 0.885 to 0.925 measured by $\delta_1$ for depth estimation, and significantly outperforms previous methods for visual odometry.
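The core idea in the abstract — replacing a per-pixel photometric difference with a difference measured on learned features, plus gradient-based regularizers that shape the loss landscape — can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the function names (`feature_metric_loss`, `landscape_regularizers`) and the unweighted forms of the first/second-order terms are assumptions for illustration; the paper's actual formulation may weight these terms differently (e.g. by image gradients).

```python
import numpy as np

def spatial_gradients(f):
    """First-order spatial differences of a feature map f of shape (H, W, C)."""
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal difference
    gy[:-1, :] = f[1:, :] - f[:-1, :]   # vertical difference
    return gx, gy

def feature_metric_loss(feat_target, feat_warped):
    """L1 difference in feature space between the target-view features and
    the source-view features warped into the target view (the feature-space
    analogue of the usual photometric L1 loss)."""
    return np.abs(feat_target - feat_warped).mean()

def landscape_regularizers(feat, alpha=1.0, beta=1.0):
    """Illustrative regularizers on the learned features:
    - a discriminative term rewarding large first-order gradients, so
      textureless regions still produce informative feature differences;
    - a convergence term penalizing second-order gradients, so the induced
      loss landscape is smooth and forms a proper convergence basin."""
    gx, gy = spatial_gradients(feat)
    gxx, _ = spatial_gradients(gx)
    _, gyy = spatial_gradients(gy)
    l_discriminative = -(np.abs(gx).mean() + np.abs(gy).mean())
    l_convergent = np.abs(gxx).mean() + np.abs(gyy).mean()
    return alpha * l_discriminative + beta * l_convergent
```

In a full pipeline, `feat_warped` would be produced by warping the source-view feature map with the predicted depth and egomotion (e.g. via differentiable bilinear sampling), and the regularizers would be added to the feature-learning objective.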