Paper Title

JVLDLoc: a Joint Optimization of Visual-LiDAR Constraints and Direction Priors for Localization in Driving Scenario

Paper Authors

Longrui Dong, Gang Zeng

Paper Abstract

The ability of a moving agent to localize itself in an environment is a basic requirement for emerging applications such as autonomous driving, yet many existing methods based on multiple sensors still suffer from drift. We propose a scheme that fuses a map prior with vanishing points extracted from images, establishing an energy term that constrains rotation only, called the direction projection error. We then embed these direction priors into a visual-LiDAR SLAM system that integrates camera and LiDAR measurements in a tightly coupled way at the backend. Specifically, our method generates visual reprojection errors and point-to-Implicit Moving Least Squares (IMLS) surface constraints from scans, and solves them jointly with the direction projection error in a global optimization. Experiments on KITTI, KITTI-360, and Oxford Radar RobotCar show that we achieve lower localization error, or Absolute Pose Error (APE), than prior map-based methods, which validates the effectiveness of our approach.
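The abstract describes stacking three residual types into one global optimization: visual reprojection errors, point-to-surface (IMLS-style point-to-plane) errors from LiDAR scans, and a rotation-only direction projection error. The following toy sketch is not the authors' implementation; all function names, dimensions, and the synthetic data are invented for illustration, and the IMLS surface is reduced to simple point-to-plane correspondences. It only shows how the three residual kinds can share one pose estimate in a Gauss-Newton solve.

```python
import numpy as np

def rot(w):
    """Rodrigues formula: axis-angle vector w -> 3x3 rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def residuals(x, pts3d, obs2d, d_map, d_obs, surf_p, surf_q, surf_n):
    """Stack the three residual families for pose x = [axis-angle | translation]."""
    w, t = x[:3], x[3:]
    R = rot(w)
    res = []
    # 1) visual reprojection error (pinhole camera, unit focal length for brevity)
    for P, u in zip(pts3d, obs2d):
        Pc = R @ P + t
        res.extend((Pc[:2] / Pc[2] - u).tolist())
    # 2) point-to-plane stand-in for the IMLS point-to-surface constraint
    for p, q, n in zip(surf_p, surf_q, surf_n):
        res.append(float(n @ (R @ p + t - q)))
    # 3) direction projection error: depends on rotation only, not translation
    for dm, do in zip(d_map, d_obs):
        res.extend((R @ dm - do).tolist())
    return np.array(res)

def gauss_newton(x0, fun, iters=20, eps=1e-6):
    """Minimal damped Gauss-Newton with a numerical Jacobian."""
    x = x0.copy()
    for _ in range(iters):
        r = fun(x)
        J = np.zeros((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (fun(x + dx) - r) / eps
        x = x - np.linalg.solve(J.T @ J + 1e-9 * np.eye(x.size), J.T @ r)
    return x

# Synthetic, noise-free example: recover a known pose from the joint cost.
rng = np.random.default_rng(0)
w_true = np.array([0.02, -0.01, 0.03])
t_true = np.array([0.1, -0.05, 0.2])
R_true = rot(w_true)
pts3d = rng.uniform(-1, 1, (8, 3)) + np.array([0.0, 0.0, 5.0])
obs2d = np.array([(R_true @ P + t_true)[:2] / (R_true @ P + t_true)[2] for P in pts3d])
surf_p = rng.uniform(-1, 1, (6, 3)) + np.array([0.0, 0.0, 5.0])
surf_q = np.array([R_true @ p + t_true for p in surf_p])
surf_n = rng.normal(size=(6, 3))
surf_n /= np.linalg.norm(surf_n, axis=1, keepdims=True)
d_map = np.eye(3)  # e.g. Manhattan-world prior directions from the map
d_obs = np.array([R_true @ d for d in d_map])  # observed vanishing directions

x_est = gauss_newton(
    np.zeros(6),
    lambda x: residuals(x, pts3d, obs2d, d_map, d_obs, surf_p, surf_q, surf_n),
)
```

A real backend would parameterize many poses, use analytic Jacobians and robust kernels, and query the IMLS surface instead of fixed correspondences; the point here is only that the direction term enters the same least-squares problem while constraining rotation alone.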
