Title

Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference

Authors

Yabin Wang, Zhiheng Ma, Zhiwu Huang, Yaowei Wang, Zhou Su, Xiaopeng Hong

Abstract

This paper focuses on the prevalent performance imbalance across the stages of incremental learning. To avoid obvious stage learning bottlenecks, we propose a brand-new stage-isolation based incremental learning framework, which leverages a series of stage-isolated classifiers to perform the learning task of each stage without interference from the others. To be concrete, to aggregate multiple stage classifiers into a uniform one impartially, we first introduce a temperature-controlled energy metric for indicating the confidence score levels of the stage classifiers. We then propose an anchor-based energy self-normalization strategy to ensure the stage classifiers work at the same energy level. Finally, we design a voting-based inference augmentation strategy for robust inference. The proposed method is rehearsal-free and can work for almost all continual learning scenarios. We evaluate the proposed method on four large benchmarks. Extensive results demonstrate the superiority of the proposed method in establishing new state-of-the-art overall performance. \emph{Code is available at} \url{https://github.com/iamwangyabin/ESN}.
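The abstract's temperature-controlled energy metric can be illustrated with the standard energy-based confidence score, E(x; T) = -T·log Σ_k exp(f_k(x)/T), where f_k are a stage classifier's logits. The sketch below is a minimal assumption-laden illustration, not the authors' implementation: it assumes lower energy means higher confidence, and that the stage whose classifier reports the lowest energy is selected at inference; the function names `energy_score` and `aggregate_by_energy` are hypothetical.

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Temperature-controlled energy: E(x; T) = -T * log(sum_k exp(logit_k / T)).
    # Lower energy corresponds to higher classifier confidence.
    logits = np.asarray(logits, dtype=float)
    return -T * np.log(np.sum(np.exp(logits / T)))

def aggregate_by_energy(stage_logits, T=1.0):
    # Hypothetical impartial aggregation over stage-isolated classifiers:
    # each stage scores the input independently; the stage with the lowest
    # energy (most confident) is chosen to produce the final prediction.
    energies = [energy_score(l, T) for l in stage_logits]
    best_stage = int(np.argmin(energies))
    return best_stage, energies
```

In this toy view, the self-normalization step described in the abstract would be what makes the energies of different stages comparable before the argmin is taken; without it, stages trained in isolation need not share a common energy scale.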
