Paper Title
Continual Local Replacement for Few-shot Learning
Paper Authors
Paper Abstract
The goal of few-shot learning is to learn a model that can recognize novel classes from only one or a few training examples. This is challenging mainly for two reasons: (1) the model lacks a good feature representation of the novel classes; (2) a few labeled examples cannot accurately represent the true data distribution, so it is hard to learn a good decision function for classification. In this work, we use a sophisticated network architecture to learn better feature representations and focus on the second issue. We propose a novel continual local replacement strategy to address the data deficiency problem. It takes advantage of the content of unlabeled images to continually enhance the labeled ones. Specifically, a pseudo-labeling method is adopted to constantly select semantically similar images on the fly. The original labeled images are then locally replaced by the selected images for the next training epoch. In this way, the model can learn new semantic information directly from unlabeled images, and the capacity of the supervised signal in the embedding space can be significantly enlarged. This allows the model to improve generalization and learn a better decision boundary for classification. Our method is conceptually simple and easy to implement. Extensive experiments demonstrate that it achieves state-of-the-art results on various few-shot image recognition benchmarks.
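The selection-and-replacement step described in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the confidence-based selection rule, and the fixed rectangular patch shape are all assumptions made here for concreteness.

```python
import numpy as np

def pseudo_label_select(model_probs, target_class):
    """Pick the unlabeled image whose predicted (pseudo) label matches
    the target class with the highest confidence.
    model_probs: (N, C) softmax outputs for N unlabeled images."""
    conf = model_probs[:, target_class]
    # Discard images whose pseudo label disagrees with the target class.
    match = model_probs.argmax(axis=1) == target_class
    conf = np.where(match, conf, -1.0)
    return int(conf.argmax())

def local_replace(labeled_img, unlabeled_img, rng, frac=0.5):
    """Replace a random rectangular region of labeled_img with the
    same region of unlabeled_img (a CutMix-style local replacement)."""
    h, w = labeled_img.shape[:2]
    rh, rw = int(h * frac), int(w * frac)
    y = rng.integers(0, h - rh + 1)
    x = rng.integers(0, w - rw + 1)
    out = labeled_img.copy()
    out[y:y + rh, x:x + rw] = unlabeled_img[y:y + rh, x:x + rw]
    return out

# Toy usage: one labeled image (all zeros) and three unlabeled images.
rng = np.random.default_rng(0)
labeled = np.zeros((8, 8))
unlabeled = [np.ones((8, 8)) * k for k in range(3)]
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]])  # hypothetical softmax outputs
idx = pseudo_label_select(probs, target_class=1)
mixed = local_replace(labeled, unlabeled[idx], rng, frac=0.5)
```

Repeating this before each epoch, with pseudo labels recomputed by the current model, would realize the "continual" part of the strategy: the labeled images are re-mixed every epoch rather than augmented once.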