Paper Title
Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents

Authors

Datta, Samyak, Maksymets, Oleksandr, Hoffman, Judy, Lee, Stefan, Batra, Dhruv, Parikh, Devi

Abstract
Recent work has presented embodied agents that can navigate to point-goal targets in novel indoor environments with near-perfect accuracy. However, these agents are equipped with idealized sensors for localization and take deterministic actions. This setting is practically sterile by comparison to the dirty reality of noisy sensors and actuations in the real world -- wheels can slip, motion sensors have error, actuations can rebound. In this work, we take a step towards this noisy reality, developing point-goal navigation agents that rely on visual estimates of egomotion under noisy action dynamics. We find these agents outperform naive adaptations of current point-goal agents to this setting as well as those incorporating classic localization baselines. Further, our model conceptually decouples learning agent dynamics or odometry (where am I?) from the task-specific navigation policy (where do I want to go?). This enables seamless adaptation to changing dynamics (a different robot or floor type) by simply re-calibrating the visual odometry model -- circumventing the expense of re-training the navigation policy. Our agent was the runner-up in the PointNav track of the CVPR 2020 Habitat Challenge.
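To make the decoupling concrete, the sketch below shows the dead-reckoning step such an architecture implies: an egomotion estimate (which in the paper's setting would come from a learned visual odometry model, but is simply passed in here) is used to update the agent's egocentric goal coordinates, which the navigation policy then consumes. The function name and the polar (rho, phi) goal parameterization are illustrative assumptions, not the authors' published interface.

```python
import math

def update_goal_estimate(goal_rho_phi, est_forward, est_turn):
    """Update an egocentric goal estimate after one action.

    goal_rho_phi: (distance, heading) of the goal in the agent's frame,
                  with heading 0 meaning "straight ahead".
    est_forward:  estimated forward translation (e.g. from visual odometry).
    est_turn:     estimated left rotation in radians.
    Returns the new (distance, heading) of the goal.
    """
    rho, phi = goal_rho_phi
    # Convert the goal to Cartesian coordinates in the agent's frame
    # (x points ahead of the agent, y to its left).
    gx = rho * math.cos(phi)
    gy = rho * math.sin(phi)
    # The agent moved forward, so the goal slides backward along x.
    gx -= est_forward
    # The agent turned left by est_turn, so the goal's bearing
    # decreases by the same amount.
    new_phi = math.atan2(gy, gx) - est_turn
    new_rho = math.hypot(gx, gy)
    return new_rho, new_phi

# Goal 2 m straight ahead; agent steps forward 0.25 m.
print(update_goal_estimate((2.0, 0.0), 0.25, 0.0))  # -> (1.75, 0.0)
```

Because the policy only ever sees the updated (rho, phi) vector, swapping in a re-calibrated odometry model for a different robot or floor type changes `est_forward` / `est_turn` without touching the policy itself.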
