Paper Title
Memory Bounds for Continual Learning
Paper Authors
Paper Abstract
Continual learning, or lifelong learning, is a formidable current challenge to machine learning. It requires the learner to solve a sequence of $k$ different learning tasks, one after the other, while retaining its aptitude for earlier tasks; the continual learner should scale better than the obvious solution of developing and maintaining a separate learner for each of the $k$ tasks. We embark on a complexity-theoretic study of continual learning in the PAC framework. We make novel uses of communication complexity to establish that any continual learner, even an improper one, needs memory that grows linearly with $k$, strongly suggesting that the problem is intractable. When logarithmically many passes over the learning tasks are allowed, we provide an algorithm based on multiplicative weights update whose memory requirement scales well; we also establish that improper learning is necessary for such performance. We conjecture that these results may lead to new promising approaches to continual learning.
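The abstract names multiplicative weights update but does not spell out the paper's construction. For orientation only, below is a minimal generic sketch of the classical multiplicative weights update rule over a fixed pool of experts (here standing in for candidate hypotheses); the function name `multiplicative_weights`, the learning rate `eta`, and the loss model are illustrative assumptions and are not the authors' algorithm.

```python
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """Generic multiplicative weights update over a fixed pool of experts.

    losses: (T, n) array; losses[t, i] in [0, 1] is expert i's loss at round t.
    Returns the final weight vector, normalized to a probability distribution.
    """
    T, n = losses.shape
    w = np.ones(n)                      # start from uniform weights
    for t in range(T):
        w *= (1.0 - eta) ** losses[t]   # shrink each expert's weight by its loss
        w /= w.sum()                    # renormalize to keep a distribution
    return w

# Toy usage: 3 experts, 100 rounds of random losses in [0, 1].
rng = np.random.default_rng(0)
print(multiplicative_weights(rng.random((100, 3))))
```

In the continual-learning setting described above, one would run such updates while making logarithmically many passes over the sequence of tasks; how the expert pool and losses are defined there is specific to the paper and not reproduced here.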