Paper Title
Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability
Paper Authors
Paper Abstract
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states such as perception results and action intent. Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient, eliminating mental inference about the robot's intention. However, there is a lack of tools for projection mapping in robotics, compared to established motion planning libraries (e.g., MoveIt). In this paper, we detail the implementation of projection mapping to enable researchers and practitioners to push the boundaries for better interaction between robots and humans. We also provide practical documentation and code for a sample manipulation projection mapping on GitHub: https://github.com/uml-robotics/projection_mapping.
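The core of projecting a robot's internal state onto its operating environment is mapping 3D world points (e.g., a perceived object or a planned grasp target) into projector pixel coordinates. The paper's own implementation details live in the linked repository; as a minimal, hedged sketch of the underlying geometry, the projector can be modeled as an inverse pinhole camera with assumed (hypothetical) calibration values:

```python
import numpy as np

def project_point(p_world, K, R, t):
    """Map a 3D world point to projector pixel coordinates,
    modeling the projector as an inverse pinhole camera."""
    p_proj = R @ p_world + t          # world frame -> projector frame
    if p_proj[2] <= 0:
        return None                   # point is behind the projector
    uvw = K @ p_proj                  # perspective projection (homogeneous)
    return uvw[:2] / uvw[2]           # normalize to pixel coordinates

# Hypothetical calibration: a 1920x1080 projector with a
# 1000 px focal length, aligned with the world frame.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 2 m ahead on the optical axis lands at the image center.
uv = project_point(np.array([0.0, 0.0, 2.0]), K, R, t)
# → array([960., 540.])
```

In practice the rotation `R` and translation `t` come from an extrinsic calibration between the projector and the robot's world frame, and the resulting pixel coordinates drive the rendered overlay (e.g., highlighting the object the robot intends to grasp).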