Paper Title

RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar

Paper Authors

Kaul, Prannay, De Martini, Daniele, Gadd, Matthew, Newman, Paul

Paper Abstract

This paper presents an efficient annotation procedure and an application thereof to end-to-end, rich semantic segmentation of the sensed environment using FMCW scanning radar. We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions. We avoid laborious manual labelling by exploiting the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors, for which semantic segmentation is an already consolidated procedure. The training procedure leverages a state-of-the-art natural image segmentation system which is publicly available and as such, in contrast to previous approaches, allows for the production of copious labels for the radar stream by incorporating four camera and two LiDAR streams. Additionally, the losses are computed taking into account labels to the radar sensor horizon by accumulating LiDAR returns along a pose-chain ahead and behind of the current vehicle position. Finally, we present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
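The abstract's two key ingredients — accumulating labelled LiDAR returns along a pose-chain ahead of and behind the current vehicle position, and presenting the network with multi-channel radar inputs — can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration, not the authors' released code: the helper names, grid size, cell resolution and class count are all assumptions, and the real pipeline additionally relies on camera-based semantic segmentation and a trained radar segmentation network.

```python
import numpy as np

def accumulate_labelled_lidar(scans, poses, T_radar_from_lidar):
    """Merge per-point semantic labels from several LiDAR scans, taken
    ahead of and behind the current vehicle position, into the frame of
    the current radar scan.

    scans  : list of (N_i, 4) arrays -- x, y, z, class_id per point
    poses  : list of (4, 4) arrays   -- SE(3) pose of each scan expressed
             in the frame of the current radar scan (from the pose-chain)
    T_radar_from_lidar : (4, 4) extrinsic calibration (assumed known)
    """
    points, labels = [], []
    for pts, T_radar_from_scan in zip(scans, poses):
        xyz1 = np.c_[pts[:, :3], np.ones(len(pts))]            # homogeneous coords
        in_radar = (T_radar_from_scan @ T_radar_from_lidar @ xyz1.T).T[:, :3]
        points.append(in_radar)
        labels.append(pts[:, 3].astype(int))
    return np.vstack(points), np.concatenate(labels)


def rasterise_labels(points, labels, grid_size=512, cell_m=0.5, n_classes=8):
    """Project the accumulated labelled points onto a bird's-eye-view grid
    matching a Cartesian radar scan, keeping the most-voted class per cell.
    Class 0 is treated here as 'unlabelled' -- a simplification."""
    half = grid_size * cell_m / 2.0
    cols = np.floor((points[:, 0] + half) / cell_m).astype(int)
    rows = np.floor((points[:, 1] + half) / cell_m).astype(int)
    keep = (cols >= 0) & (cols < grid_size) & (rows >= 0) & (rows < grid_size)
    votes = np.zeros((grid_size, grid_size, n_classes), dtype=np.int32)
    np.add.at(votes, (rows[keep], cols[keep], labels[keep]), 1)
    return votes.argmax(axis=-1)                                # (H, W) label grid


def stack_radar_channels(radar_scans):
    """Stack consecutive Cartesian radar scans as input channels so the
    network can reason about ephemeral and dynamic scene objects."""
    return np.stack(radar_scans, axis=0)                        # (T, H, W) tensor
```

The point of the pose-chain accumulation is that a single LiDAR scan covers far less of the scene than the radar horizon; aggregating scans from positions ahead of and behind the vehicle lets labels be generated out to the full radar range before the per-cell rasterisation above.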
