Paper Title

Adversarial Weight Perturbation Helps Robust Generalization

Paper Authors

Dongxian Wu, Shu-tao Xia, Yisen Wang

Paper Abstract

The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years. Among defenses, adversarial training is the most promising one; it flattens the input loss landscape (loss change with respect to the input) via training on adversarially perturbed examples. However, how the widely used weight loss landscape (loss change with respect to the weights) behaves in adversarial training is rarely explored. In this paper, we investigate the weight loss landscape from a new perspective, and identify a clear correlation between the flatness of the weight loss landscape and the robust generalization gap. Several well-recognized adversarial training improvements, such as early stopping, designing new objective functions, or leveraging unlabeled data, all implicitly flatten the weight loss landscape. Based on these observations, we propose a simple yet effective Adversarial Weight Perturbation (AWP) to explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism in the adversarial training framework that adversarially perturbs both inputs and weights. Extensive experiments demonstrate that AWP indeed yields a flatter weight loss landscape and can be easily incorporated into various existing adversarial training methods to further boost their adversarial robustness.
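The double-perturbation idea can be sketched in a few lines. The toy example below applies AWP to a numpy logistic-regression model: one FGSM-style step perturbs the inputs, a norm-scaled gradient step perturbs the weights, and the descent step is taken on the doubly perturbed loss. This is a simplified single-step variant under assumed hyperparameters (`eps`, `gamma`, `lr`); the paper itself uses multi-step PGD and deep networks.

```python
import numpy as np

def loss(w, X, y):
    """Binary cross-entropy with logits, y in {0, 1}."""
    z = X @ w
    return np.mean(np.log1p(np.exp(-z * (2 * y - 1))))

def grad_w(w, X, y):
    """Gradient of the loss with respect to the weights."""
    s = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid of logits
    return X.T @ (s - y) / len(y)

def grad_x(w, X, y):
    """Gradient of the loss with respect to each input row."""
    s = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.outer(s - y, w) / len(y)

def awp_step(w, X, y, eps=0.1, gamma=0.01, lr=0.5):
    # 1) adversarial input perturbation (single FGSM step)
    X_adv = X + eps * np.sign(grad_x(w, X, y))
    # 2) adversarial weight perturbation, scaled to the weight norm
    g = grad_w(w, X_adv, y)
    v = gamma * np.linalg.norm(w) * g / (np.linalg.norm(g) + 1e-12)
    # 3) descend on the doubly perturbed loss, then remove the
    #    weight perturbation v so only the update remains
    w_adv = w + v
    return w_adv - lr * grad_w(w_adv, X_adv, y) - v
```

Minimizing the loss under the worst-case weight perturbation `v` is what pushes the optimizer toward flat regions of the weight loss landscape: a sharp minimum would make the perturbed loss spike, so it is penalized.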
