Paper Title

Towards Robust Dataset Learning

Authors

Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang Zhang

Abstract

Adversarial training has been actively studied in recent computer vision research to improve the robustness of models. However, due to the huge computational cost of generating adversarial samples, adversarial training methods are often slow. In this paper, we study the problem of learning a robust dataset such that any classifier naturally trained on the dataset is adversarially robust. Such a dataset benefits downstream tasks, as natural training is much faster than adversarial training, and demonstrates that the desired property of robustness is transferable between models and data. In this work, we propose a principled, tri-level optimization to formulate the robust dataset learning problem. We show that, under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset. Extensive experiments on MNIST, CIFAR10, and TinyImageNet demonstrate the effectiveness of our algorithm with different network initializations and architectures.
