Paper Title

Enhancing Continual Learning with Global Prototypes: Counteracting Negative Representation Drift

Paper Authors

Xueying Bai, Jinghuan Shang, Yifan Sun, Niranjan Balasubramanian

Paper Abstract

Continual learning (CL) aims to learn a sequence of tasks over time, with data distributions shifting from one task to another. When training on new task data, data representations from old tasks may drift. Some negative representation drift can result in catastrophic forgetting, by causing the locally learned class prototypes and data representations to correlate poorly across tasks. To mitigate such representation drift, we propose a method that finds global prototypes to guide the learning, and learns data representations with the regularization of the self-supervised information. Specifically, for NLP tasks, we formulate each task in a masked language modeling style, and learn the task via a neighbor attention mechanism over a pre-trained language model. Experimental results show that our proposed method can learn fairly consistent representations with less representation drift, and significantly reduce catastrophic forgetting in CL without resampling data from past tasks.
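To make the "masked language modeling style" formulation concrete, below is a minimal sketch of classifying text by predicting a label word at a [MASK] position, where the label words' pre-trained embeddings act as globally shared class anchors instead of task-specific classifier heads. The prompt template, label words, and backbone model are illustrative assumptions, not the paper's exact configuration, and the sketch omits the neighbor attention mechanism and self-supervised regularization described in the abstract.

```python
# A hedged sketch of MLM-style classification with label words as global class anchors.
# Assumptions: bert-base-uncased backbone, a "Topic:" prompt template, and label words
# that each map to a single vocabulary token. None of these come from the paper itself.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumption: any masked LM backbone would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical verbalizer: each class is tied to one vocabulary token, whose
# pre-trained output embedding plays the role of a global prototype shared across tasks.
label_words = {"sports": "sports", "business": "business", "tech": "technology"}
label_ids = [tokenizer.convert_tokens_to_ids(w) for w in label_words.values()]

def classify(text: str) -> str:
    # MLM-style formulation: the class label is predicted at the [MASK] position.
    prompt = f"{text} Topic: {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    # Score only the label words; argmax over these scores gives the predicted class.
    label_scores = logits[0, mask_pos, :][:, label_ids].squeeze(0)
    return list(label_words.keys())[label_scores.argmax().item()]

print(classify("The quarterly earnings report beat analyst expectations."))
```

Because the class anchors live in the pre-trained vocabulary space rather than in freshly initialized per-task heads, representations learned for new tasks stay aligned with the same global targets, which is the intuition behind using global prototypes to counteract negative representation drift.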
