Paper Title
Few-Shot Unsupervised Continual Learning through Meta-Examples
Paper Authors
Paper Abstract
In real-world applications, data rarely resemble those commonly used for neural network training, since they are usually scarce, unlabeled, and may only be available as a stream. Hence, many existing deep learning solutions suffer from a limited range of applications, in particular in the case of online streaming data that evolve over time. To narrow this gap, in this work we introduce a novel and complex setting involving unsupervised meta-continual learning with unbalanced tasks. These tasks are built through a clustering procedure applied to a fitted embedding space. We exploit a meta-learning scheme that simultaneously alleviates catastrophic forgetting and favors generalization to new tasks. Moreover, to encourage feature reuse during meta-optimization, we adopt a single inner loop that operates on an aggregated representation obtained through a self-attention mechanism. Experimental results on few-shot learning benchmarks show competitive performance even compared to the supervised case. Additionally, we empirically observe that in an unsupervised scenario, small tasks and variability in cluster pooling play a crucial role in the generalization capability of the network. Further, on complex datasets, using more clusters than the true number of classes leads to better results, even compared to those obtained with full supervision, suggesting that a predefined partitioning into classes can miss relevant structural information.
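To make the two central ideas of the abstract more concrete, the following is a minimal sketch (not the authors' code): unsupervised, possibly unbalanced tasks are built by clustering a fitted embedding space, and each task's examples are aggregated into a single "meta-example" via a self-attention step. All names (`build_tasks`, `attention_pool`) and hyper-parameters are hypothetical choices for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans


def build_tasks(embeddings, n_clusters=20, shots=5, rng=None):
    """Cluster the embedding space and sample a few-shot task per cluster.

    Clusters smaller than `shots` yield smaller tasks, which naturally
    produces the unbalanced tasks mentioned in the abstract.
    """
    rng = np.random.default_rng(rng)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    tasks = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        take = min(shots, len(idx))
        tasks.append(embeddings[rng.choice(idx, size=take, replace=False)])
    return tasks


def attention_pool(task_embeddings):
    """Aggregate a task's examples into one vector with dot-product self-attention."""
    x = np.asarray(task_embeddings)                      # (n, d)
    scores = x @ x.T / np.sqrt(x.shape[1])               # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # softmax over the task's examples
    attended = weights @ x                               # attended examples, (n, d)
    return attended.mean(axis=0)                         # single aggregated "meta-example"


if __name__ == "__main__":
    embeddings = np.random.randn(500, 64)                # stand-in for a fitted embedding space
    tasks = build_tasks(embeddings, n_clusters=10)
    meta_examples = [attention_pool(t) for t in tasks]
    print(len(meta_examples), meta_examples[0].shape)    # 10 (64,)
```

In a full meta-continual learning pipeline these aggregated representations would feed a single inner-loop update of the meta-learner; that step is omitted here, as the abstract does not specify the exact optimization details.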