Paper Title
Insights from the Future for Continual Learning
Paper Authors
Paper Abstract
Continual learning aims to learn tasks sequentially, with (often severe) constraints on the storage of old learning samples, without suffering from catastrophic forgetting. In this work, we propose prescient continual learning, a novel experimental setting, to incorporate existing information about the classes prior to any training data. Usually, each task in a traditional continual learning setting evaluates the model on present and past classes, the latter with a limited number of training samples. Our setting adds future classes, with no training samples at all. We introduce Ghost Model, a representation-learning-based model for continual learning that uses ideas from zero-shot learning. A generative model of the representation space, in concert with a careful adjustment of the losses, allows us to exploit insights from future classes to constrain the spatial arrangement of the past and current classes. Quantitative results on the AwA2 and aP\&Y datasets and detailed visualizations showcase the interest of this new setting and the method we propose to address it.
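To make the described setting concrete, here is a minimal, hypothetical sketch of how class labels could be partitioned into past, current, and future ("ghost") groups at each task; all names are illustrative and not taken from the paper's code.

```python
def make_task_splits(classes, classes_per_task):
    """Partition class labels into sequential tasks.

    At task t, classes from earlier tasks are 'past' (limited stored
    samples), this task's classes are 'current', and later classes are
    'future' ghost classes with no training samples at all -- only prior
    side information (e.g. attributes, as in zero-shot learning).
    """
    tasks = [classes[i:i + classes_per_task]
             for i in range(0, len(classes), classes_per_task)]
    splits = []
    for t in range(len(tasks)):
        splits.append({
            "past": [c for task in tasks[:t] for c in task],
            "current": tasks[t],
            "future": [c for task in tasks[t + 1:] for c in task],
        })
    return splits

# Ten classes, two per task: at task 0 there are no past classes and
# classes 2-9 are future (ghost) classes.
splits = make_task_splits(list(range(10)), classes_per_task=2)
```

Evaluation in the proposed setting would then cover all three groups at every task, rather than only past and current classes as in the traditional protocol.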