Paper Title

Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners

Paper Authors

Elre T. Oldewage, John Bronskill, Richard E. Turner

Paper Abstract

This paper examines the robustness of deployed few-shot meta-learning systems when they are fed an imperceptibly perturbed few-shot dataset. We attack amortized meta-learners, which allows us to craft colluding sets of inputs that are tailored to fool the system's learning algorithm when used as training data. Jointly crafted adversarial inputs might be expected to synergistically manipulate a classifier, allowing for very strong data-poisoning attacks that would be hard to detect. We show that in a white box setting, these attacks are very successful and can cause the target model's predictions to become worse than chance. However, in opposition to the well-known transferability of adversarial examples in general, the colluding sets do not transfer well to different classifiers. We explore two hypotheses to explain this: 'overfitting' by the attack, and mismatch between the model on which the attack is generated and that to which the attack is transferred. Regardless of the mitigation strategies suggested by these hypotheses, the colluding inputs transfer no better than adversarial inputs that are generated independently in the usual way.
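
To make the attack described in the abstract concrete, here is a minimal sketch of how a colluding support-set poisoning attack could be implemented with projected gradient descent. The `meta_learner(support_x, support_y, query_x)` interface, the epsilon budget, step size, and iteration count are all assumptions for illustration, not the authors' released code or exact procedure.

```python
# Hypothetical sketch of a colluding support-set poisoning attack on an
# amortized meta-learner. The meta_learner interface and hyperparameters
# below are placeholders, not the paper's actual implementation.
import torch
import torch.nn.functional as F

def poison_support_set(meta_learner, support_x, support_y, query_x, query_y,
                       eps=8 / 255, step_size=2 / 255, n_steps=40):
    """Jointly optimize an imperceptible L-infinity perturbation over the
    entire support set so that the meta-learner, after adapting on the
    poisoned support set, misclassifies the clean query set."""
    delta = torch.zeros_like(support_x, requires_grad=True)
    for _ in range(n_steps):
        poisoned = (support_x + delta).clamp(0.0, 1.0)
        # Assumed amortized-learner API: adapt on the (poisoned) support set,
        # then predict logits for the query set in a single forward pass.
        logits = meta_learner(poisoned, support_y, query_x)
        # Ascend the query loss; all support perturbations "collude" because
        # they share this single objective.
        loss = F.cross_entropy(logits, query_y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()
            delta.clamp_(-eps, eps)  # project back into the L-infinity ball
    return (support_x + delta).detach().clamp(0.0, 1.0)
```

Because the perturbations on all support inputs are optimized against one shared query-loss objective rather than independently, this corresponds to the "colluding set" framing used in the abstract; swapping in per-example objectives would recover the usual independently generated adversarial inputs used as the baseline for transfer comparisons.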
