Paper Title
Reconnoitering the class distinguishing abilities of the features, to know them better
Paper Authors
Paper Abstract
The relevance of machine learning (ML) in our daily lives is closely intertwined with its explainability. Explainability allows end-users to form a transparent and humane reckoning of an ML scheme's capabilities and utility, and it fosters user confidence in a system's automated decisions. Explaining the variables or features behind a model's decision is a pressing need, yet we could not find any prior work that explains features on the basis of their class-distinguishing abilities, especially given that real-world data are mostly multi-class in nature. In any given dataset, a feature is not equally good at distinguishing between the different possible categorizations (or classes) of the data points. In this work, we explain the features on the basis of their class- or category-distinguishing capabilities. In particular, we estimate the class-distinguishing capabilities (scores) of the variables for pairwise class combinations. We empirically validate the explainability offered by our scheme on several real-world, multi-class datasets. We further utilize the class-distinguishing scores in a latent-feature context and propose a novel decision-making protocol. Another novelty of this work lies in a \emph{refuse to render decision} option, invoked when the latent variable of the test point has a high class-distinguishing potential for the likely classes.
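To make the core idea concrete, here is a minimal sketch of how one might compute a per-feature score for every pairwise class combination. The Fisher-style separability criterion used here is an illustrative assumption only; the abstract does not specify the paper's actual scoring function, and the helper name is hypothetical.

```python
import numpy as np
from itertools import combinations

def pairwise_class_distinguishing_scores(X, y):
    """For every pair of classes, score each feature by how well it
    separates the two classes: absolute difference of class-conditional
    means, normalized by the pooled standard deviation.

    NOTE: this Fisher-style criterion is a stand-in for illustration;
    the paper's actual scoring scheme is not given in the abstract.
    """
    classes = np.unique(y)
    scores = {}  # (class_a, class_b) -> array of per-feature scores
    for a, b in combinations(classes, 2):
        Xa, Xb = X[y == a], X[y == b]
        pooled_std = np.sqrt((Xa.var(axis=0) + Xb.var(axis=0)) / 2) + 1e-12
        scores[(a, b)] = np.abs(Xa.mean(axis=0) - Xb.mean(axis=0)) / pooled_std
    return scores

if __name__ == "__main__":
    # Small multi-class example: which feature best separates each class pair?
    from sklearn.datasets import load_iris
    data = load_iris()
    scores = pairwise_class_distinguishing_scores(data.data, data.target)
    for pair, s in scores.items():
        best = int(np.argmax(s))
        print(f"classes {pair}: best feature = {data.feature_names[best]} "
              f"(score {s[best]:.2f})")
```

A pairwise (rather than one-vs-rest) decomposition like this is what lets the scheme say, for a specific pair of likely classes, whether a feature is trustworthy for telling them apart, which is the quantity the refuse-to-render-decision option would consult.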