Paper Title


Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping

Authors

Adithyavairavan Murali, Weiyu Liu, Kenneth Marino, Sonia Chernova, Abhinav Gupta

Abstract


Despite the enormous progress and generalization in robotic grasping in recent years, existing methods have yet to scale and generalize task-oriented grasping to the same extent. This is largely due to the scale of the datasets, both in terms of the number of objects and of the tasks studied. We address these concerns with the TaskGrasp dataset, which is more diverse both in terms of objects and tasks, and an order of magnitude larger than previous datasets. The dataset contains 250K task-oriented grasps for 56 tasks and 191 objects along with their RGB-D information. We take advantage of this new breadth and diversity in the data and present the GCNGrasp framework, which uses the semantic knowledge of objects and tasks encoded in a knowledge graph to generalize to new object instances, classes, and even new tasks. Our framework shows a significant improvement of around 12% on held-out settings compared to baseline methods that do not use semantics. We demonstrate that our dataset and model are applicable in the real world by executing task-oriented grasps with a real robot on unknown objects. Code, data, and a supplementary video can be found at https://sites.google.com/view/taskgrasp
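The abstract describes propagating semantic knowledge over a graph of objects and tasks with a graph convolutional network. The sketch below is only a rough illustration of that idea, not the GCNGrasp implementation: the node names, the single Kipf-Welling-style convolution layer, and the cosine grasp-task score are all illustrative assumptions.

```python
import numpy as np

# Toy knowledge graph: nodes are object classes and tasks.
# Names are illustrative, not drawn from the TaskGrasp ontology.
nodes = ["mug", "spatula", "pour", "flip"]
edges = [("mug", "pour"), ("spatula", "flip")]  # object--task affordance links

idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)

# Adjacency with self-loops, symmetrically normalized.
A = np.eye(n)
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

rng = np.random.default_rng(0)
H = rng.normal(size=(n, 8))   # initial node embeddings (random stand-ins)
W = rng.normal(size=(8, 8))   # GCN weight matrix (untrained here)

# One graph-convolution layer: each node aggregates its neighbours,
# so task nodes mix in features of the objects they afford.
H1 = np.maximum(A_hat @ H @ W, 0.0)

def grasp_task_score(grasp_emb, task):
    """Cosine similarity between a grasp embedding and a task node
    (a hypothetical compatibility score, not the paper's head)."""
    t = H1[idx[task]]
    denom = np.linalg.norm(grasp_emb) * np.linalg.norm(t) + 1e-9
    return float(grasp_emb @ t / denom)

score = grasp_task_score(rng.normal(size=8), "pour")
```

With trained embeddings and weights, a held-out object connected to known task nodes would inherit task-relevant features through the same propagation step, which is the intuition behind generalizing to new instances and classes.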
