Paper Title

Anti-Retroactive Interference for Lifelong Learning

Paper Authors

Runqi Wang, Yuxiang Bao, Baochang Zhang, Jianzhuang Liu, Wentao Zhu, Guodong Guo

Paper Abstract

Humans can continuously learn new knowledge. However, machine learning models suffer a drastic drop in performance on previous tasks after learning new tasks. Cognitive science points out that competition among similar knowledge is an important cause of forgetting. In this paper, we design a paradigm for lifelong learning based on the meta-learning and associative mechanisms of the brain. It tackles the problem from two aspects: extracting knowledge and memorizing knowledge. First, we disrupt the background distribution of the samples through a background attack, which strengthens the model's ability to extract the key features of each task. Second, based on the similarity between incremental knowledge and base knowledge, we design an adaptive fusion of incremental knowledge, which helps the model allocate capacity to knowledge of different difficulty. We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum. The proposed method is validated on the MNIST, CIFAR100, CUB200, and ImageNet100 datasets.
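The abstract names two mechanisms but gives no implementation details. As an illustration only, below is a minimal PyTorch sketch of what a "background attack" could look like under the assumption that a per-sample foreground mask is available: an FGSM-style perturbation applied only to background pixels, so the object stays intact while the background distribution is disrupted. The function name `background_attack`, the mask `fg_mask`, and the step size `eps` are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def background_attack(model, images, labels, fg_mask, eps=8 / 255):
    """Hypothetical sketch: perturb only background pixels with a
    single FGSM-style step, leaving the foreground object intact.

    fg_mask: (N, 1, H, W) binary tensor, 1 on the object, 0 on background.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Sign-gradient step, masked to the background region only.
    noise = eps * images.grad.sign() * (1.0 - fg_mask)
    return (images + noise).clamp(0.0, 1.0).detach()
```

Likewise, one plausible reading of "adaptive fusion based on the similarity between incremental and base knowledge" is a similarity-weighted blend of the two feature branches, where dissimilar (harder) incremental knowledge receives more capacity. The sketch below is one such reading, not the paper's actual fusion rule; `adaptive_fusion` and the specific weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_fusion(base_feat, inc_feat):
    """Hypothetical sketch: fuse incremental features into base features
    with a weight derived from their cosine similarity.

    base_feat, inc_feat: (N, D) features from the base and incremental
    branches. More dissimilar knowledge gets a larger incremental weight.
    """
    sim = F.cosine_similarity(base_feat, inc_feat, dim=1, eps=1e-8)  # (N,)
    alpha = (1.0 - sim).div(2.0).unsqueeze(1)  # map [-1, 1] -> [0, 1]
    return (1.0 - alpha) * base_feat + alpha * inc_feat
```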
