Paper Title

Explain yourself! Effects of Explanations in Human-Robot Interaction

Authors

Ambsdorf, Jakob, Munir, Alina, Wei, Yiyao, Degkwitz, Klaas, Harms, Harm Matthias, Stannek, Susanne, Ahrens, Kyra, Becker, Dennis, Strahl, Erik, Weber, Tom, Wermter, Stefan

Abstract

Recent developments in explainable artificial intelligence promise the potential to transform human-robot interaction: explanations of robot decisions could affect user perceptions, justify their reliability, and increase trust. However, the effects on human perceptions of robots that explain their decisions have not been studied thoroughly. To analyze the effect of explainable robots, we conducted a study in which two simulated robots play a competitive board game. While one robot explains its moves, the other robot only announces them. Providing explanations for its actions was not sufficient to change the perceived competence, intelligence, likeability, or safety ratings of the robot. However, the results show that the robot that explains its moves is perceived as more lively and human-like. This study demonstrates the need for and potential of explainable human-robot interaction and the wider assessment of its effects as a novel research direction.
