Paper Title
RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
Paper Authors
Paper Abstract
Semantic scene understanding from point clouds is particularly challenging as the points reflect only a sparse set of the underlying 3D geometry. Previous works often convert point clouds into regular grids (e.g., voxels or bird's-eye view images) and resort to grid-based convolutions for scene understanding. In this work, we introduce RfD-Net, which jointly detects and reconstructs dense object surfaces directly from raw point clouds. Instead of representing scenes with regular grids, our method leverages the sparsity of point cloud data and focuses on predicting shapes that are recognized with high objectness. With this design, we decouple instance reconstruction into global object localization and local shape prediction. This not only eases the difficulty of learning 2-D manifold surfaces from sparse 3D space; the point clouds in each object proposal also convey shape details that support implicit function learning to reconstruct high-resolution surfaces. Our experiments indicate that instance detection and reconstruction have complementary effects, where the shape prediction head consistently improves object detection with modern 3D proposal network backbones. Qualitative and quantitative evaluations further demonstrate that our approach consistently outperforms the state of the art and improves mesh IoU in object reconstruction by more than 11 points.
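To make the "global object localization + local shape prediction" decomposition described above concrete, the following is a minimal, hypothetical PyTorch-style sketch: one head regresses per-proposal box parameters and objectness, and a per-proposal implicit decoder predicts occupancy for arbitrary 3D query points. The module names, layer sizes, and tensor shapes are illustrative assumptions, not the authors' RfD-Net implementation.

# Hypothetical sketch of the detect-then-reconstruct decomposition in the abstract.
# Not the authors' code; all names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class ProposalHead(nn.Module):
    """Global object localization: per-proposal box parameters + objectness."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 6 + 1),  # box center (3) + size (3) + objectness (1)
        )

    def forward(self, seed_feats):            # (B, K, feat_dim)
        out = self.mlp(seed_feats)            # (B, K, 7)
        boxes, objectness = out[..., :6], out[..., 6]
        return boxes, objectness


class ImplicitShapeDecoder(nn.Module):
    """Local shape prediction: occupancy of query points conditioned on a
    per-proposal shape code (implicit function learning)."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(code_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, shape_code, queries):   # (B, K, code_dim), (B, K, Q, 3)
        code = shape_code.unsqueeze(2).expand(-1, -1, queries.shape[2], -1)
        occ_logits = self.mlp(torch.cat([code, queries], dim=-1)).squeeze(-1)
        return occ_logits                     # (B, K, Q) occupancy logits


if __name__ == "__main__":
    B, K, Q, D = 2, 8, 512, 128               # batch, proposals, queries, feature dim
    seed_feats = torch.randn(B, K, D)          # stand-in for point-cloud backbone features
    boxes, objectness = ProposalHead(D)(seed_feats)
    occ = ImplicitShapeDecoder(D)(seed_feats, torch.rand(B, K, Q, 3))
    print(boxes.shape, objectness.shape, occ.shape)  # (2, 8, 6) (2, 8) (2, 8, 512)

Because the decoder evaluates an occupancy function at arbitrary query locations rather than filling a fixed voxel grid, the reconstructed surface resolution is limited only by how densely the queries are sampled, which is the property the abstract refers to when it mentions reconstructing high-resolution surfaces.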