Paper Title

Autonomous Navigation in Complex Environments with Deep Multimodal Fusion Network

Paper Authors

Anh Nguyen, Ngoc Nguyen, Kim Tran, Erman Tjiputra, Quang D. Tran

Paper Abstract


Autonomous navigation in complex environments is a crucial task in time-sensitive scenarios such as disaster response or search and rescue. However, complex environments pose significant challenges for autonomous platforms due to their constrained narrow passages, unstable pathways with debris and obstacles, irregular geological structures, and poor lighting conditions. In this work, we propose a multimodal fusion approach to address the problem of autonomous navigation in complex environments such as collapsed cities or natural caves. We first simulate the complex environments in a physics-based simulation engine and collect a large-scale dataset for training. We then propose a Navigation Multimodal Fusion Network (NMFNet) with three branches that effectively handle three visual modalities: laser scans, RGB images, and point cloud data. Extensive experimental results show that our NMFNet outperforms recent state-of-the-art methods by a fair margin while achieving real-time performance. We further show that the use of multiple modalities is essential for autonomous navigation in complex environments. Finally, we successfully deploy our network to both simulated and real mobile robots.
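The abstract describes a three-branch network that fuses laser, RGB, and point-cloud features into a navigation output. As a rough illustration only (the paper's actual layer sizes, encoders, and fusion scheme are not given here), the general shape of such late-fusion can be sketched in numpy with stand-in linear encoders and a concatenation-based fusion head; all shapes and the steering-regression head are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, out_dim, seed):
    """Stand-in per-modality encoder: a fixed random linear map plus tanh.

    In the real network each branch would be a learned encoder
    (e.g. a CNN for RGB); this is only a shape-level sketch.
    """
    w = np.random.default_rng(seed).standard_normal((x.shape[-1], out_dim))
    return np.tanh(x @ w)

# Hypothetical per-modality inputs (dimensions are illustrative, not from the paper).
laser = rng.standard_normal((1, 360))       # e.g. a 360-beam 2D laser scan
rgb = rng.standard_normal((1, 512))         # e.g. pooled features of an RGB frame
pointcloud = rng.standard_normal((1, 256))  # e.g. pooled point-cloud features

# Three branches, one per modality, each mapped to a 64-d feature vector.
feats = [encode(laser, 64, 1), encode(rgb, 64, 2), encode(pointcloud, 64, 3)]

# Late fusion by concatenation, then a linear head regressing a steering command.
fused = np.concatenate(feats, axis=-1)  # shape (1, 192)
head = rng.standard_normal((fused.shape[-1], 1))
steering = np.tanh(fused @ head)        # shape (1, 1), bounded in [-1, 1]
print(steering.shape)
```

Dropping any one branch from `feats` shrinks the fused vector accordingly, which is the kind of ablation the abstract alludes to when arguing that multiple modalities are essential.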
