Paper Title

TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations

Paper Authors

Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh

Paper Abstract

Machine Learning (ML) models are increasingly used to make critical decisions in real-world applications, yet they have become more complex, making them harder to understand. To this end, researchers have proposed several techniques to explain model predictions. However, practitioners struggle to use these explainability techniques because they often do not know which one to choose and how to interpret the results of the explanations. In this work, we address these challenges by introducing TalkToModel: an interactive dialogue system for explaining machine learning models through conversations. Specifically, TalkToModel comprises three key components: 1) a natural language interface for engaging in conversations, making ML model explainability highly accessible, 2) a dialogue engine that adapts to any tabular model and dataset, interprets natural language, maps it to appropriate explanations, and generates text responses, and 3) an execution component that constructs the explanations. We carried out extensive quantitative and human subject evaluations of TalkToModel. Overall, we found the conversational system understands user inputs on novel datasets and models with high accuracy, demonstrating the system's capacity to generalize to new situations. In real-world evaluations with humans, 73% of healthcare workers (e.g., doctors and nurses) agreed they would use TalkToModel over baseline point-and-click systems for explainability in a disease prediction task, and 85% of ML professionals agreed TalkToModel was easier to use for computing explanations. Our findings demonstrate that TalkToModel is more effective for model explainability than existing systems, introducing a new category of explainability tools for practitioners. Code & demo released here: https://github.com/dylan-slack/TalkToModel.
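To make the three-component architecture in the abstract concrete, here is a minimal sketch of one conversational turn, written against scikit-learn and SHAP. This is not the actual TalkToModel implementation: the function names (parse_utterance, run_operation) are hypothetical stand-ins, and the keyword lookup stands in for the system's learned natural-language parsing, which maps utterances to explanation operations.

```python
# A minimal sketch of the abstract's three-component pipeline, assuming
# scikit-learn and SHAP. All names here are hypothetical illustrations,
# not the actual TalkToModel API.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# A simple tabular model standing in for "any tabular model and dataset".
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

def parse_utterance(text: str) -> str:
    """Dialogue engine (sketch): map a natural-language request onto an
    explanation operation. TalkToModel learns this mapping; a keyword
    lookup stands in for it here."""
    if "why" in text.lower() or "explain" in text.lower():
        return "feature_importance"
    return "predict"

def run_operation(op: str, instance) -> str:
    """Execution component (sketch): construct the explanation and
    render it as a text response."""
    row = instance.reshape(1, -1)
    if op == "feature_importance":
        # Explain the class-1 probability with SHAP's model-agnostic
        # explainer, using 100 rows as the background distribution.
        explainer = shap.Explainer(
            lambda d: model.predict_proba(d)[:, 1], X[:100]
        )
        values = explainer(row).values[0]
        top = max(range(len(values)), key=lambda i: abs(values[i]))
        return (f"The most influential feature is "
                f"'{data.feature_names[top]}' "
                f"(SHAP value {values[top]:.3f}).")
    return f"The model predicts class {model.predict(row)[0]}."

# Natural-language interface (sketch): a single conversational turn.
print(run_operation(parse_utterance("Why did the model predict this?"), X[0]))
```

The division of labor mirrors the abstract: parsing the user's request, selecting an appropriate explanation, and executing it to produce a text reply are kept as separate steps, which is what lets the middle layer adapt to different tabular models and datasets.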
