Paper Title

SimCS: Simulation for Domain Incremental Online Continual Segmentation

Authors

Motasem Alfarra, Zhipeng Cai, Adel Bibi, Bernard Ghanem, Matthias Müller

Abstract

Continual learning (CL) is a step towards lifelong intelligence, where models continuously learn from recently collected data without forgetting previous knowledge. Existing continual learning approaches mostly focus on image classification in the class-incremental setup, with clear task boundaries and an unlimited computational budget. This work explores the problem of Online Domain-Incremental Continual Segmentation (ODICS), where the model is continually trained over batches of densely labeled images from different domains, with limited computation and no information about the task boundaries. ODICS arises in many practical applications. In autonomous driving, this may correspond to the realistic scenario of training a segmentation model over time on a sequence of cities. We analyze several existing continual learning methods and show that they perform poorly in this setting despite working well in class-incremental segmentation. We propose SimCS, a parameter-free method complementary to existing ones that uses simulated data to regularize continual learning. Experiments show that SimCS provides consistent improvements when combined with different CL methods.
