Paper Title
Adversarial vulnerability of powerful near out-of-distribution detection
Paper Authors
Paper Abstract
There has been significant progress in detecting out-of-distribution (OOD) inputs in neural networks recently, primarily due to the use of large models pretrained on large datasets and the emerging use of multi-modality. We show a severe adversarial vulnerability in even the strongest current OOD detection techniques. With small, targeted perturbations to the input pixels, we can easily change an image's assignment from in-distribution to out-of-distribution, and vice versa. In particular, we demonstrate severe adversarial vulnerability on the challenging near-OOD CIFAR-100 vs CIFAR-10 task, as well as on the far-OOD CIFAR-100 vs SVHN task. We study the adversarial robustness of several post-processing techniques, including the simple baseline of Maximum Softmax Probability (MSP), the Mahalanobis distance, and the newly proposed \textit{Relative} Mahalanobis distance. By comparing the loss of OOD detection performance at various perturbation strengths, we demonstrate the benefit of using ensembles of OOD detectors, and of the \textit{Relative} Mahalanobis distance over other post-processing methods. In addition, we show that even strong zero-shot OOD detection using CLIP and multi-modality suffers from a severe lack of adversarial robustness. Our code is available at https://github.com/stanislavfort/adversaries_to_OOD_detection
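To make the attack concrete: the abstract describes small, targeted pixel perturbations that raise (or lower) a detector's confidence score so an input crosses the OOD decision threshold. Below is a minimal, illustrative sketch of this idea against the MSP baseline, using an iterated FGSM-style sign-gradient step on a toy linear model in NumPy. The model, scales, and function names (`attack_msp`, `msp_score`) are our own illustrative choices, not the paper's actual setup, which attacks large pretrained networks.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def msp_score(x, W, b):
    """Maximum Softmax Probability: high for confident, in-distribution-looking inputs."""
    return softmax(W @ x + b).max()

def attack_msp(x, W, b, eps=0.1, steps=20):
    """Iterated FGSM-style attack that RAISES the MSP score of x, i.e. makes an
    out-of-distribution input look in-distribution. Flipping the sign of the
    step would instead push an in-distribution input toward OOD."""
    x_adv = x.copy()
    alpha = eps / steps
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        k = p.argmax()
        one_hot = np.zeros_like(p)
        one_hot[k] = 1.0
        # Analytic gradient of p[k] w.r.t. x for a linear model:
        # d p_k / d x = W^T (p_k * (e_k - p))
        grad = W.T @ (p[k] * (one_hot - p))
        x_adv = x_adv + alpha * np.sign(grad)
        # Stay inside the L-infinity eps-ball around the original input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy demo: a random linear "classifier" and a random input
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(5, 32))
b = 0.1 * rng.normal(size=5)
x = rng.normal(size=32)

score_before = msp_score(x, W, b)
x_adv = attack_msp(x, W, b, eps=0.1, steps=20)
score_after = msp_score(x_adv, W, b)
```

The same loop structure applies to any differentiable OOD score (e.g. a Mahalanobis-distance score), which is why the paper can compare the robustness of different post-processing methods under matched perturbation budgets `eps`.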