Paper Title

Adversarial Incremental Learning

Paper Authors

Singh, Ankur

Paper Abstract

Although deep learning performs well in a wide variety of tasks, it still suffers from catastrophic forgetting -- the tendency of neural networks to forget previously learned information upon learning new tasks for which previous data is not available. Earlier incremental learning methods tackle this problem by using a part of the old dataset, by generating exemplars, or by using memory networks. Although these methods have shown good results, using exemplars or generating them increases memory and computation requirements. To solve these problems, we propose an adversarial-discriminator-based method that does not use old data at all while training on new tasks. We particularly tackle the class-incremental learning problem in image classification, where data is provided in a class-based sequential manner. For this problem, the network is trained using an adversarial loss along with the traditional cross-entropy loss. The cross-entropy loss helps the network progressively learn new classes, while the adversarial loss helps preserve information about the existing classes. Using this approach, we are able to outperform other state-of-the-art methods on the CIFAR-100, SVHN, and MNIST datasets.
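The abstract does not give implementation details, but a minimal PyTorch sketch of the combined objective it describes might look like the following. All names (feature_extractor, discriminator, lambda_adv), the network sizes, and the exact adversarial formulation are assumptions for illustration, not the paper's actual method; the only parts taken from the abstract are the two loss terms and their roles.

    import torch
    import torch.nn as nn

    # Assumed toy architecture; the paper does not specify one.
    feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
    classifier = nn.Linear(256, 100)   # e.g. CIFAR-100 classes seen so far
    discriminator = nn.Linear(256, 1)  # judges whether features look "old-task"

    ce_loss = nn.CrossEntropyLoss()
    adv_loss = nn.BCEWithLogitsLoss()
    lambda_adv = 1.0  # assumed trade-off weight; not given in the abstract

    def training_step(images, labels):
        """One hypothetical training step: cross-entropy drives learning of
        new classes, while an adversarial term preserves old-class information
        without replaying any old data."""
        features = feature_extractor(images)
        logits = classifier(features)

        # Cross-entropy on the new classes.
        loss_ce = ce_loss(logits, labels)

        # Adversarial term: push new-task features to be scored as "old"
        # (target 1) by the discriminator.
        d_out = discriminator(features)
        loss_adv = adv_loss(d_out, torch.ones_like(d_out))

        return loss_ce + lambda_adv * loss_adv

    # Usage with random data of CIFAR-like shape:
    images = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, 100, (8,))
    training_step(images, labels).backward()

In the paper's actual setup the discriminator would itself be trained in alternation with the classifier, as in standard adversarial training; the sketch above shows only the classifier-side update.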
