Paper Title
InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling
Paper Authors
Paper Abstract
Real-time 3D object detection is crucial for autonomous cars. Achieving promising performance with high efficiency, voxel-based approaches have received considerable attention. However, previous methods model the input space with features extracted from equally divided sub-regions, without considering that point clouds are generally non-uniformly distributed over the space. To address this issue, we propose a novel 3D object detection framework with dynamic information modeling. The proposed framework is designed in a coarse-to-fine manner. Coarse predictions are generated in the first stage via a voxel-based region proposal network. We introduce InfoFocus, which improves the coarse detections by adaptively refining features guided by the information of point cloud density. Experiments are conducted on the large-scale nuScenes 3D detection benchmark. Results show that our framework achieves state-of-the-art performance at 31 FPS and improves our baseline significantly by 9.0% mAP on the nuScenes test set.
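The abstract describes the density-guided refinement only at a high level, so the following is a minimal, hypothetical sketch (not the paper's actual implementation) of how a second stage might re-weight pooled proposal features by per-proposal point density before refined classification and box regression. All names here (DensityGuidedRefiner, points_per_roi, feat_dim, the head layouts) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a density-guided, coarse-to-fine refinement stage.
# The module and variable names are assumptions for illustration only; they
# show the general idea of gating pooled proposal features by how many
# LiDAR points fall inside each coarse box (a simple proxy for density).

import torch
import torch.nn as nn


class DensityGuidedRefiner(nn.Module):
    """Refines coarse proposal features using a per-proposal density signal."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 10):
        super().__init__()
        # Maps a scalar density statistic to a per-channel gating weight.
        self.density_gate = nn.Sequential(
            nn.Linear(1, feat_dim),
            nn.Sigmoid(),
        )
        # Second-stage heads for refined classification and box regression.
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.reg_head = nn.Linear(feat_dim, 7)  # (x, y, z, w, l, h, yaw)

    def forward(self, roi_feats: torch.Tensor, points_per_roi: torch.Tensor):
        # roi_feats:      (num_rois, feat_dim) features pooled from the
        #                 first-stage voxel backbone for each coarse proposal.
        # points_per_roi: (num_rois,) raw point count inside each proposal,
        #                 used here as a crude density signal.
        density = torch.log1p(points_per_roi.float()).unsqueeze(-1)  # (num_rois, 1)
        gate = self.density_gate(density)                            # (num_rois, feat_dim)
        refined = roi_feats * gate                                   # density-weighted features
        return self.cls_head(refined), self.reg_head(refined)


if __name__ == "__main__":
    refiner = DensityGuidedRefiner()
    rois = torch.randn(32, 128)             # pooled features for 32 coarse boxes
    counts = torch.randint(0, 500, (32,))   # points falling inside each box
    scores, deltas = refiner(rois, counts)
    print(scores.shape, deltas.shape)       # torch.Size([32, 10]) torch.Size([32, 7])
```

The log1p transform in this sketch is only one plausible way to keep sparse and dense proposals on a comparable scale; the paper itself may use a different density statistic or refinement mechanism.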