Paper Title

Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge

Authors

Pat Verga, Haitian Sun, Livio Baldini Soares, William W. Cohen

Abstract


Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials. To address these problems, we develop a neural language model that includes an explicit interface between symbolically interpretable factual information and subsymbolic neural knowledge. We show that this model dramatically improves performance on two knowledge-intensive question-answering tasks. More interestingly, the model can be updated without re-training by manipulating its symbolic representations. In particular this model allows us to add new facts and overwrite existing ones in ways that are not possible for earlier models.
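The abstract's central claim is that facts live in an explicit symbolic store with a neural interface, so a fact can be added or overwritten without retraining. The following is a minimal sketch of that idea, not the paper's actual architecture: a key-value memory where each interpretable triple has a dense key and a value embedding, queries attend over the keys, and editing a fact simply swaps the symbolic object and its value vector in place. All names, dimensions, and encodings here are illustrative assumptions.

```python
import numpy as np

# Hypothetical key-value fact memory in the spirit of "Facts as Experts".
# Each fact (subject, relation, object) gets a dense key and a value
# embedding; a query attends over keys and returns a weighted value.
rng = np.random.default_rng(0)
DIM = 8

# Illustrative entity embeddings (random stand-ins for learned vectors).
entity_emb = {"Paris": rng.normal(size=DIM), "Lyon": rng.normal(size=DIM)}

def encode_key(subject_vec, relation_vec):
    # Assumed key encoder: a simple sum of subject and relation vectors.
    return subject_vec + relation_vec

subj = rng.normal(size=DIM)  # stand-in encoding of "France"
rel = rng.normal(size=DIM)   # stand-in encoding of "capital_of"

# Symbolic store kept alongside the dense keys/values, so every memory
# slot remains human-inspectable.
facts = [("France", "capital_of", "Paris")]
keys = np.stack([encode_key(subj, rel)])
values = np.stack([entity_emb["Paris"]])

def query(q):
    # Softmax attention over fact keys; returns the value-weighted sum.
    scores = keys @ q
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

before = query(encode_key(subj, rel))

# Overwrite the fact with no retraining: edit the symbolic triple and
# replace its value embedding in place.
facts[0] = ("France", "capital_of", "Lyon")
values[0] = entity_emb["Lyon"]

after = query(encode_key(subj, rel))
```

With a single stored fact the attention weight is 1, so `after` equals the new object's embedding exactly; the point of the sketch is that the update touched only the memory slot, not any model parameters.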
