Paper Title

Anomaly Detection with Test Time Augmentation and Consistency Evaluation

Authors

Haowei He, Jiaye Teng, Yang Yuan

Abstract

Deep neural networks are known to be vulnerable to unseen data: they may wrongly assign high confidence scores to out-distribution samples. Recent works try to solve the problem using representation learning methods and specific metrics. In this paper, we propose a simple yet effective post-hoc anomaly detection algorithm named Test Time Augmentation Anomaly Detection (TTA-AD), inspired by a novel observation. Specifically, we observe that in-distribution data enjoy more consistent predictions for their original and augmented versions on a trained network than out-distribution data, which separates in-distribution and out-distribution samples. Experiments on various high-resolution image benchmark datasets demonstrate that TTA-AD achieves comparable or better detection performance under dataset-vs-dataset anomaly detection settings with a 60%-90% running-time reduction over existing classifier-based algorithms. We provide empirical verification that the key to TTA-AD lies in the remaining classes between augmented features, which have long been partially ignored by previous works. Additionally, we use RUNS as a surrogate to analyze our algorithm theoretically.
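The core observation above (predictions on an input and its augmented version agree more for in-distribution data than for out-distribution data) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy logits, the cosine-similarity consistency score, and the `consistency_score` helper are all illustrative assumptions; in practice the two logit vectors would come from running a trained classifier on an image and an augmented copy of it.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D logit vector
    e = np.exp(z - z.max())
    return e / e.sum()

def consistency_score(logits_orig, logits_aug):
    # Cosine similarity between the two softmax prediction vectors;
    # higher means the network predicts consistently across the
    # augmentation, i.e. the sample looks in-distribution.
    p, q = softmax(logits_orig), softmax(logits_aug)
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

# Toy logits (illustrative): an in-distribution sample keeps a confident,
# stable prediction under augmentation; an out-distribution sample's
# prediction is diffuse and shifts.
s_in = consistency_score(np.array([4.0, 0.5, 0.2]),
                         np.array([3.8, 0.6, 0.1]))
s_out = consistency_score(np.array([1.2, 1.0, 0.9]),
                          np.array([0.4, 2.1, 0.8]))

# Samples whose score falls below a chosen threshold would be
# flagged as anomalous.
assert s_in > s_out
```

A detector built on this idea needs only forward passes through an already-trained network, which is consistent with the post-hoc, low-runtime framing of the abstract.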
