Paper Title
Dissecting Catastrophic Forgetting in Continual Learning by Deep Visualization
Paper Authors
Paper Abstract
Interpreting the behaviors of Deep Neural Networks (usually considered black boxes) is critical, especially as they are now widely adopted across diverse aspects of human life. Drawing on advances in Explainable Artificial Intelligence, this paper proposes a novel technique called Auto DeepVis to dissect catastrophic forgetting in continual learning. A new method for dealing with catastrophic forgetting, named critical freezing, is also introduced based on the dilemma uncovered by Auto DeepVis. Experiments on a captioning model meticulously show how catastrophic forgetting happens, in particular which components are forgetting or changing. The effectiveness of our technique is then assessed; more precisely, critical freezing achieves the best performance over baselines on both previous and upcoming tasks, proving the value of the investigation. Our technique could not only complement existing solutions toward completely eradicating catastrophic forgetting in lifelong learning, but is also explainable.
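The abstract does not detail the implementation of critical freezing. As a rough, hedged illustration only (not the authors' code), the PyTorch sketch below shows the general idea: layers identified as critical for a previous task have their parameters frozen so that fine-tuning on a new task cannot overwrite them. The model choice and the `critical_layers` list are hypothetical stand-ins for the output of an Auto DeepVis-style analysis.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torchvision.models as models

# Stand-in backbone; the paper uses a captioning model instead.
model = models.resnet18(weights=None)

# Hypothetical result of an Auto DeepVis-style analysis: layer names
# judged critical for retaining knowledge of the previous task.
critical_layers = ["layer1", "layer2"]

# Critical freezing: disable gradients for parameters in critical layers
# so updates on the next task leave them untouched.
for name, param in model.named_parameters():
    if any(name.startswith(layer) for layer in critical_layers):
        param.requires_grad = False

# Only the remaining (non-critical) parameters are trained on the new task.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```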