Paper Title
Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations
Paper Authors
Paper Abstract
We present Neural Feature Fusion Fields (N3F), a method that improves dense 2D image feature extractors when the latter are applied to the analysis of multiple images reconstructible as a 3D scene. Given an image feature extractor, for example pre-trained using self-supervision, N3F uses it as a teacher to learn a student network defined in 3D space. The 3D student network is similar to a neural radiance field that distills said features and can be trained with the usual differentiable rendering machinery. As a consequence, N3F is readily applicable to most neural rendering formulations, including vanilla NeRF and its extensions to complex dynamic scenes. We show that our method not only enables semantic understanding in the context of scene-specific neural fields without the use of manual labels, but also consistently improves over the self-supervised 2D baselines. This is demonstrated by considering various tasks, such as 2D object retrieval, 3D segmentation, and scene editing, in diverse sequences, including long egocentric videos in the EPIC-KITCHENS benchmark.
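To make the teacher-student distillation idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how a 3D feature field could be supervised by a frozen 2D teacher: per-point features are alpha-composited along rays with the same weights a NeRF uses for color, and the rendered features are regressed onto the teacher's per-pixel features. It assumes PyTorch; the names `FeatureField`, `render_features`, and `distillation_loss`, and all hyperparameters, are hypothetical placeholders.

```python
# Sketch only: distilling a 2D teacher's features into a 3D "feature field"
# via standard NeRF-style volume rendering. Not the authors' code.

import torch
import torch.nn as nn

class FeatureField(nn.Module):
    """Maps a 3D point to a volume density and a feature vector (the 3D student)."""
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)        # density head
        self.feat = nn.Linear(hidden, feat_dim)  # distilled feature head

    def forward(self, xyz):
        h = self.mlp(xyz)
        return self.sigma(h), self.feat(h)

def render_features(field, ray_o, ray_d, n_samples=64, near=0.1, far=4.0):
    """Alpha-composite per-point features along each ray, reusing the usual
    NeRF compositing weights."""
    t = torch.linspace(near, far, n_samples, device=ray_o.device)          # (S,)
    pts = ray_o[:, None, :] + ray_d[:, None, :] * t[None, :, None]         # (R, S, 3)
    sigma, feat = field(pts)                                               # (R, S, 1), (R, S, F)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-torch.relu(sigma.squeeze(-1)) * delta)        # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]                                                    # (R, S)
    weights = alpha * trans
    return (weights[..., None] * feat).sum(dim=1)                          # (R, F)

def distillation_loss(field, ray_o, ray_d, teacher_feat):
    """Teacher features (e.g., from a frozen self-supervised ViT) act as
    per-pixel regression targets for the rendered student features."""
    student_feat = render_features(field, ray_o, ray_d)
    return ((student_feat - teacher_feat) ** 2).mean()
```

Because the feature head only adds an extra output to whatever point-wise network the renderer already uses, this kind of distillation can in principle be attached to most neural rendering formulations, which is consistent with the abstract's claim about vanilla NeRF and its dynamic-scene extensions.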