Paper Title

ScrewNet: Category-Independent Articulation Model Estimation From Depth Images Using Screw Theory

Paper Authors

Ajinkya Jain, Rudolf Lioutikov, Caleb Chuck, Scott Niekum

Paper Abstract

Robots in human environments will need to interact with a wide variety of articulated objects such as cabinets, drawers, and dishwashers while assisting humans in performing day-to-day tasks. Existing methods either require objects to be textured or need to know the articulation model category a priori in order to estimate the model parameters for an articulated object. We propose ScrewNet, a novel approach that estimates an object's articulation model directly from depth images without requiring a priori knowledge of the articulation model category. ScrewNet uses screw theory to unify the representation of different articulation types and perform category-independent articulation model estimation. We evaluate our approach on two benchmark datasets and compare its performance with a current state-of-the-art method. Results demonstrate that ScrewNet can successfully estimate the articulation models and their parameters for novel objects across articulation model categories, with better average accuracy than the prior state-of-the-art method. Project webpage: https://pearl-utexas.github.io/ScrewNet/
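
For readers unfamiliar with the screw-theoretic unification mentioned in the abstract, here is a minimal sketch using standard screw theory; the notation below is ours and may differ from the paper's exact parameterization. A screw axis can be written in Plücker coordinates as a unit direction together with a moment about the origin,

\[
  \mathcal{S} = (\mathbf{l}, \mathbf{m}), \qquad \|\mathbf{l}\| = 1, \qquad \mathbf{m} = \mathbf{p} \times \mathbf{l},
\]

where \(\mathbf{p}\) is any point on the axis. A one-degree-of-freedom articulation about \(\mathcal{S}\) is a rotation by \(\theta\) about the axis combined with a translation by \(d\) along it, so the common joint types arise as special cases:

\[
  \text{revolute: } d = 0, \qquad \text{prismatic: } \theta = 0, \qquad \text{helical: } d = h\,\theta \ \text{for a fixed pitch } h.
\]

Because all of these categories share the one representation \((\mathbf{l}, \mathbf{m}, \theta, d)\), a single estimator can, in principle, recover the articulation parameters without being told the joint type in advance.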
