Paper Title
Towards Lifelong Self-Supervision For Unpaired Image-to-Image Translation
Paper Authors
Paper Abstract
Unpaired Image-to-Image Translation (I2IT) tasks often suffer from a lack of data, a problem that self-supervised learning (SSL) has recently been very popular and successful at tackling. Leveraging auxiliary tasks such as rotation prediction or generative colorization, SSL can produce better and more robust representations in a low-data regime. Training such tasks alongside an I2IT task is, however, computationally intractable as model size and the number of tasks grow. On the other hand, learning them sequentially could incur catastrophic forgetting of previously learned tasks. To alleviate this, we introduce Lifelong Self-Supervision (LiSS) as a way to pre-train an I2IT model (e.g., CycleGAN) on a set of self-supervised auxiliary tasks. By keeping an exponential moving average of past encoders and distilling the accumulated knowledge, we are able to maintain the network's validation performance on a number of tasks without any form of replay, parameter isolation, or retraining, techniques typically used in continual learning. We show that models trained with LiSS perform better on past tasks, while also being more robust than the CycleGAN baseline to color bias and entity entanglement (when two entities are very close).
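The core mechanism the abstract describes, keeping an exponential moving average (EMA) of past encoders and distilling their knowledge into the live encoder, can be sketched in a few lines. This is a minimal illustration under assumed names (`ema_update`, `distillation_loss`, `BETA` are hypothetical, not the paper's actual API), with parameters and features represented as plain lists of floats:

```python
# Minimal sketch of EMA-based knowledge distillation, as described in the
# LiSS abstract. All names and the decay value are illustrative assumptions.

BETA = 0.999  # EMA decay rate; a commonly used value, assumed here


def ema_update(ema_params, current_params, beta=BETA):
    """Blend the current encoder's weights into the running EMA copy."""
    return [beta * e + (1.0 - beta) * c
            for e, c in zip(ema_params, current_params)]


def distillation_loss(student_feats, teacher_feats):
    """Mean squared error pulling the live (student) encoder's features
    toward the frozen EMA (teacher) encoder's features, so knowledge from
    earlier self-supervised tasks is retained without replay."""
    return sum((s - t) ** 2
               for s, t in zip(student_feats, teacher_feats)) / len(student_feats)
```

The EMA encoder acts as a slowly moving teacher: after each training step its weights are updated with `ema_update`, and the distillation term regularizes the current encoder toward it, which is what lets the method avoid replay buffers or parameter isolation.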