Paper Title

Leveraging Rationales to Improve Human Task Performance

Paper Authors

Devleena Das, Sonia Chernova

Paper Abstract

Machine learning (ML) systems across many application areas are increasingly demonstrating performance that is beyond that of humans. In response to the proliferation of such models, the field of Explainable AI (XAI) has sought to develop techniques that enhance the transparency and interpretability of machine learning methods. In this work, we consider a question not previously explored within the XAI and ML communities: Given a computational system whose performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve the performance of the human? We study this question in the context of the game of Chess, for which computational game engines that surpass the performance of the average player are widely available. We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods, which we evaluate with a multi-day user study against two baselines. The results show that our approach produces rationales that lead to statistically significant improvement in human task performance, demonstrating that rationales automatically generated from an AI's internal task model can be used not only to explain what the system is doing, but also to instruct the user and ultimately improve their task performance.
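To make the abstract's central idea concrete, the sketch below shows one way a natural-language rationale could be derived from a utility-based chess engine's move evaluations, using the python-chess library with a UCI engine such as Stockfish. This is an illustration of the general idea only, not the paper's Rationale-Generating Algorithm; the engine path, search depth, number of candidate moves, and rationale wording are all assumptions made for demonstration.

```python
# Illustrative sketch: phrase a simple "rationale" for a chess engine's
# preferred move by comparing the engine's utility estimates (centipawn
# scores) of its top candidate moves. Not the paper's algorithm.
import chess
import chess.engine


def rationale_for_best_move(fen: str,
                            engine_path: str = "stockfish",  # assumed path
                            depth: int = 12) -> str:
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        # With multipv, analyse() returns one info dict per candidate line.
        infos = engine.analyse(board, chess.engine.Limit(depth=depth), multipv=3)

    # Collect (move, score) pairs from the engine's point of view of the
    # side to move; mate scores are mapped to a large centipawn value.
    scored = []
    for info in infos:
        move = info["pv"][0]
        score = info["score"].relative.score(mate_score=100000)
        scored.append((move, score))
    scored.sort(key=lambda ms: ms[1], reverse=True)

    best, best_score = scored[0]
    if len(scored) > 1:
        runner_up, runner_score = scored[1]
        return (f"{board.san(best)} is preferred: the engine values it about "
                f"{best_score - runner_score} centipawns above the next best "
                f"move, {board.san(runner_up)}.")
    return f"{board.san(best)} is the only candidate the engine considered."


if __name__ == "__main__":
    print(rationale_for_best_move(chess.STARTING_FEN))
```

The sketch frames a rationale as a comparison of utilities across candidate moves, echoing the abstract's point that explanations drawn from an AI's internal task model can instruct the user rather than merely describe the system's choice.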
