Paper Title
Comparison of Motion Encoding Frameworks on Human Manipulation Actions
Paper Authors
Paper Abstract
Movement generation, and especially generalisation to unseen situations, plays an important role in robotics. Different types of movement generation methods exist, such as spline-based methods, dynamical-system-based methods, and methods based on Gaussian mixture models (GMMs). Using a large, new dataset on human manipulations, in this paper we provide a highly detailed comparison of five fundamentally different and widely used movement encoding and generation frameworks: dynamic movement primitives (DMPs), time-based Gaussian mixture regression (tbGMR), the stable estimator of dynamical systems (SEDS), probabilistic movement primitives (ProMPs), and optimal control primitives (OCPs). We compare these frameworks with respect to their movement encoding efficiency, reconstruction accuracy, and movement generalisation capabilities. The new dataset consists of nine object manipulation actions performed by 12 humans: pick and place, put on top/take down, put inside/take out, hide/uncover, and push/pull, with a total of 7,652 movement examples. Our analysis shows that, for movement encoding and reconstruction, DMPs and OCPs are the most efficient with respect to the number of parameters and reconstruction accuracy, provided a sufficient number of kernels is used. For movement generalisation to new start- and end-point situations, DMPs, OCPs, and task-parameterized GMM (TP-GMM, a movement generalisation framework based on tbGMR) lead to similar performance, which ProMPs only achieve when many demonstrations are used for learning. All models outperform SEDS, which additionally proves difficult to fit. Furthermore, we observe that TP-GMM and SEDS suffer from problems reaching the end-points of generalisations. These quantitative results will help to select the most appropriate model and to design trajectory representations in an improved, task-dependent way in future robotic applications.