Paper Title
Optimal Continual Learning has Perfect Memory and is NP-hard
Paper Authors
Paper Abstract
Continual Learning (CL) algorithms incrementally learn a predictor or representation across multiple sequentially observed tasks. Designing CL algorithms that perform reliably and avoid so-called catastrophic forgetting has proven a persistent challenge. The current paper develops a theoretical approach that explains why. In particular, we derive the computational properties which CL algorithms would have to possess in order to avoid catastrophic forgetting. Our main finding is that such optimal CL algorithms generally solve an NP-hard problem and will require perfect memory to do so. The findings are of theoretical interest, but also explain the excellent performance of CL algorithms using experience replay, episodic memory and core sets relative to regularization-based approaches.
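The sketch below is not from the paper; it is a minimal illustration of the experience-replay idea the abstract credits with strong performance: keep a finite memory buffer of past examples and rehearse them alongside new-task data. The `ReplayBuffer` class, the reservoir-sampling policy, and the toy training loop are all assumptions made for illustration. A finite buffer is only an approximation of the "perfect memory" the paper argues optimal CL requires.

```python
# Illustrative sketch (not the paper's construction): a replay buffer that
# retains an approximately uniform subsample of all examples seen so far,
# so that training on a new task can rehearse earlier tasks.
import random


class ReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []  # stored (x, y) examples from past tasks
        self.seen = 0    # total number of examples observed so far

    def add(self, example):
        """Insert via reservoir sampling: every observed example remains
        in the buffer with probability capacity / seen."""
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k: int):
        """Draw a rehearsal mini-batch of stored past examples."""
        return random.sample(self.items, min(k, len(self.items)))


# Toy usage: interleave current-task examples with rehearsed old ones.
buffer = ReplayBuffer(capacity=200)
for task_id in range(3):
    task_data = [((task_id, i), task_id) for i in range(100)]  # toy (x, y) pairs
    for example in task_data:
        replay = buffer.sample(8)       # examples from earlier tasks
        batch = [example] + replay      # train on new and old data together
        # ... a gradient step on `batch` would go here ...
        buffer.add(example)
print(f"buffer holds {len(buffer.items)} of {buffer.seen} examples seen")
```

Under this reading, the buffer's capacity directly controls how closely the method approximates the perfect memory the paper shows optimal CL needs, whereas regularization-based methods keep no past data at all.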