Title
Camera-Lidar Integration: Probabilistic Sensor Fusion for Semantic Mapping
Authors
Abstract
An automated vehicle operating in an urban environment must be able to perceive and recognise objects and obstacles in a three-dimensional world while navigating in a constantly changing environment. In order to plan and execute accurate, sophisticated driving maneuvers, a high-level contextual understanding of the surroundings is essential. Due to recent progress in image processing, it is now possible to obtain high-definition semantic information in 2D from monocular cameras, though cameras cannot reliably provide the highly accurate 3D information provided by lasers. The fusion of these two sensor modalities can overcome the shortcomings of each individual sensor, though there are a number of important challenges that need to be addressed in a probabilistic manner. In this paper, we address the common, yet challenging, lidar/camera/semantic fusion problems which are seldom approached in a wholly probabilistic manner. Our approach is capable of using a multi-sensor platform to build a three-dimensional semantic voxelized map that considers the uncertainty of all of the processes involved. We present a probabilistic pipeline that incorporates uncertainties from the sensor readings (cameras, lidar, IMU and wheel encoders), compensation for the motion of the vehicle, and heuristic label probabilities for the semantic images. We also present a novel and efficient viewpoint validation algorithm to check for occlusions from the camera frames. A probabilistic projection is performed from the camera images to the lidar point cloud. Each labelled lidar scan then feeds into an octree map-building algorithm that updates the class probabilities of the map voxels every time a new observation is available. We validate our approach using a set of qualitative and quantitative experimental tests on the USyd Dataset.
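The abstract's final step, updating per-voxel class probabilities as new labelled scans arrive, can be illustrated with a minimal Bayesian-fusion sketch. This is not the paper's implementation: the class names, the uniform prior, and the likelihood vectors below are hypothetical, and the update shown is a generic multiplicative fusion of a voxel's current class distribution with the label confidence of a new observation, followed by renormalization.

```python
import numpy as np

def update_voxel_class_probs(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """Fuse a voxel's current class distribution with a new observation.

    prior      : current per-class probabilities for the voxel (sums to 1).
    likelihood : per-class label confidence of the new labelled lidar
                 point that fell inside this voxel.
    Returns the renormalized posterior distribution.
    """
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Hypothetical 3-class example: [road, vehicle, vegetation].
voxel = np.full(3, 1.0 / 3.0)                 # start from a uniform prior
scan_labels = [np.array([0.7, 0.2, 0.1]),     # two successive observations,
               np.array([0.6, 0.3, 0.1])]     # both favouring "road"
for obs in scan_labels:
    voxel = update_voxel_class_probs(voxel, obs)

print(voxel.argmax())  # most likely class after both updates
```

Each update is independent of how many scans came before, so the map can be refined incrementally as the abstract describes, without storing past observations.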