Paper Title
Do ML Experts Discuss Explainability for AI Systems? A discussion case in the industry for a domain-specific solution
Paper Authors
Paper Abstract
The application of Artificial Intelligence (AI) tools in different domains is becoming mandatory for all companies wishing to excel in their industries. One major challenge for the successful application of AI is combining machine learning (ML) expertise with domain knowledge to obtain the best results from AI tools. Domain specialists have an understanding of the data and how it can impact their decisions. ML experts have the ability to use AI-based tools to deal with large amounts of data and generate insights for domain experts. But without a deep understanding of the data, ML experts are not able to tune their models to get optimal results for a specific domain. Therefore, domain experts are key users of ML tools, and the explainability of those AI tools becomes an essential feature in that context. There are many efforts to research AI explainability for different contexts, users, and goals. In this position paper, we discuss interesting findings about how ML experts can express concerns about AI explainability while defining features of an ML tool to be developed for a specific domain. We analyze data from two brainstorming sessions held to discuss the functionalities of an ML tool to support geoscientists (domain experts) in analyzing seismic data (domain-specific data) with ML resources.