Paper Title
TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security
Paper Authors
Paper Abstract
Despite AI's significant growth, its "black box" nature creates challenges in generating adequate trust. Thus, it is seldom utilized as a standalone unit in high-risk IoT applications, such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing an XAI that is both fast and accurate remains challenging, especially in numerical applications. Here, we propose a universal XAI model named Transparency Relying Upon Statistical Theory (TRUST), which is model-agnostic, high-performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information to rank these variables, select only those most influential on the AI's outputs, and call them "representatives" of the classes. Then we use multi-modal Gaussian distributions to determine the likelihood that any new sample belongs to each class. We demonstrate the effectiveness of TRUST in a case study on the cybersecurity of the industrial Internet of Things (IIoT), a prominent application that deals with numerical data, using three different cybersecurity datasets. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with LIME, a popular XAI model, TRUST is shown to be superior in terms of performance, speed, and method of explainability. Finally, we also show how TRUST is explained to the user.
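The pipeline the abstract describes (factor analysis → mutual-information ranking of latent variables → per-class multimodal Gaussian likelihoods) can be sketched roughly as below. This is a minimal illustration built on scikit-learn stand-ins, not the paper's implementation: the dataset, the number of factors, the number of representatives, and the number of mixture components are all illustrative assumptions.

```python
# Hypothetical sketch of the TRUST-style pipeline; all parameter choices
# (n_components, top-k representatives, mixture size) are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture

# Toy numerical data standing in for an IIoT cybersecurity dataset.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=2, random_state=0)

# 1) Factor analysis: transform input features into latent variables.
fa = FactorAnalysis(n_components=10, random_state=0)
Z = fa.fit_transform(X)

# 2) Rank latent variables by mutual information with the AI's output
#    and keep only the most influential ones ("representatives").
mi = mutual_info_classif(Z, y, random_state=0)
reps = np.argsort(mi)[::-1][:4]          # top-4 latent variables
Z_reps = Z[:, reps]

# 3) Fit a multi-modal Gaussian (mixture) over the representatives
#    of each class separately.
models = {c: GaussianMixture(n_components=3, random_state=0).fit(Z_reps[y == c])
          for c in np.unique(y)}

# 4) A new sample is assigned to (explained by) the class whose
#    mixture gives it the highest log-likelihood.
def trust_predict(x_new):
    z = fa.transform(x_new.reshape(1, -1))[:, reps]
    scores = {c: m.score_samples(z)[0] for c, m in models.items()}
    return max(scores, key=scores.get)

pred = trust_predict(X[0])
```

Because the per-class likelihoods are explicit Gaussian densities over a handful of ranked latent variables, each decision can be traced back to how close the sample sits to each class's statistical profile, which is the sense in which this scheme is explainable.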