Paper Title


Quantifying probabilistic robustness of tree-based classifiers against natural distortions

Authors

Christoph Schweimer, Sebastian Scher

Abstract


The concept of trustworthy AI has gained widespread attention lately. One aspect relevant to trustworthy AI is the robustness of ML models. In this study, we show how to probabilistically quantify the robustness of tree-based classifiers against naturally occurring distortions of input data, under the assumption that the natural distortions can be described by multivariate probability distributions that can be transformed to multivariate normal distributions. The idea is to extract the decision rules of a trained tree-based classifier, separate the feature space into non-overlapping regions, and determine the probability that a distorted data sample is still assigned its predicted label. The approach builds on the recently introduced measure of real-world robustness, which works for all black-box classifiers but is only an approximation and only works if the input dimension is not too high, whereas our proposed method gives an exact measure.
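The core computation described in the abstract can be sketched in a few lines: the leaf regions of a tree-based classifier are axis-aligned boxes, so under an independent (diagonal-covariance) normal distortion the probability mass of each box factorizes into a product of one-dimensional normal CDFs, and the robustness of a sample is the summed mass of all boxes sharing its predicted label. This is a minimal illustration, not the authors' implementation; the leaf regions below are hypothetical hand-written examples of what one would extract from a trained tree, and the diagonal-covariance assumption is a simplification of the paper's more general setting.

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def box_probability(x, sigma, lo, hi):
    # P(distorted sample lands in the axis-aligned box [lo, hi])
    # under independent N(x_i, sigma_i^2) distortion per feature.
    p = 1.0
    for xi, si, l, h in zip(x, sigma, lo, hi):
        p *= normal_cdf((h - xi) / si) - normal_cdf((l - xi) / si)
    return p

def robustness(x, sigma, regions, label):
    # Sum the probability mass over all leaf regions predicting `label`.
    return sum(box_probability(x, sigma, lo, hi)
               for lo, hi, lab in regions if lab == label)

# Hypothetical leaf regions of a depth-2 tree on two features:
# (lower bounds, upper bounds, predicted class); +/-inf marks
# directions in which the tree places no split.
INF = float("inf")
regions = [
    ((-INF, -INF), (0.5, INF), 0),   # x0 <= 0.5            -> class 0
    ((0.5, -INF), (INF, 0.2), 1),    # x0 > 0.5, x1 <= 0.2  -> class 1
    ((0.5, 0.2), (INF, INF), 0),     # x0 > 0.5, x1 > 0.2   -> class 0
]

x = (0.8, 0.1)       # sample predicted as class 1
sigma = (0.1, 0.1)   # assumed per-feature distortion scales
p = robustness(x, sigma, regions, 1)
```

Because the regions partition the feature space exactly, the label-wise probabilities sum to one, which makes the measure exact rather than a sampling approximation.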
