Paper Title
Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning
Paper Authors
Paper Abstract
Adversarial examples are inputs to machine learning models that an attacker has crafted to cause the model to make mistakes. In this paper, we demonstrate that adversarial examples can also be used for good, to improve the performance of imbalanced learning. We provide a new perspective on how to deal with imbalanced data: adjust the biased decision boundary by training with Guiding Adversarial Examples (GAEs). Our method can effectively increase the accuracy of minority classes while sacrificing little accuracy on majority classes. We empirically show, on several benchmark datasets, that our proposed method is comparable to state-of-the-art methods. To the best of our knowledge, we are the first to deal with imbalanced learning via adversarial examples.
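The abstract does not specify how the guiding adversarial examples are generated, but a standard way to produce adversarial examples is a gradient-sign (FGSM-style) step on the input. Below is a minimal, hypothetical sketch using a hand-written binary logistic classifier in NumPy; the function name `fgsm_perturb` and all parameter values are illustrative assumptions, not the paper's actual GAE procedure.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.1):
    """FGSM-style adversarial perturbation for a binary logistic
    classifier p(y=1 | x) = sigmoid(w.x + b).

    For the logistic loss, the gradient with respect to the input is
    (p - y) * w, so one FGSM step moves x by eps in the sign of that
    gradient, which increases the loss on the true label y.

    NOTE: illustrative sketch only; the paper's GAE construction may
    differ (e.g., targeted perturbations toward minority regions).
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad_x = (p - y) * w                     # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

# Hypothetical usage: perturb a minority-class sample (label 1) so it
# becomes a harder example near the biased decision boundary; training
# on such examples can push the boundary away from the minority class.
w = np.array([1.0, -2.0])      # assumed classifier weights
b = 0.5                        # assumed bias
x = np.array([0.3, 0.1])       # assumed minority-class input
x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.1)
```

After the step, the classifier's confidence in the true label drops, confirming the perturbation moved the sample toward the decision boundary, which is the kind of boundary-adjusting signal the abstract describes.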