Paper Title

Reasoning Through Memorization: Nearest Neighbor Knowledge Graph Embeddings

Authors

Peng Wang, Xin Xie, Xiaohan Wang, Ningyu Zhang

Abstract


Previous knowledge graph embedding approaches usually map entities to representations and utilize score functions to predict the target entities, yet they typically struggle to reason rare or emerging unseen entities. In this paper, we propose kNN-KGE, a new knowledge graph embedding approach with pre-trained language models, by linearly interpolating its entity distribution with k-nearest neighbors. We compute the nearest neighbors based on the distance in the entity embedding space from the knowledge store. Our approach can allow rare or emerging entities to be memorized explicitly rather than implicitly in model parameters. Experimental results demonstrate that our approach can improve inductive and transductive link prediction results and yield better performance for low-resource settings with only a few triples, which might be easier to reason via explicit memory. Code is available at https://github.com/zjunlp/KNN-KG.
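The core mechanism described above — linearly interpolating the model's entity distribution with a distribution built from the k nearest neighbors in an entity embedding space — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function and parameter names (`knn_interpolate`, `lam`, `temperature`) are hypothetical, and the interpolation weight and distance metric are assumptions for the sake of the example.

```python
import numpy as np

def knn_interpolate(model_probs, query_emb, store_embs, store_entity_ids,
                    num_entities, k=3, temperature=1.0, lam=0.5):
    """Sketch of kNN-based interpolation (hypothetical names/hyperparameters):
    blend a model's entity distribution with a distribution derived from the
    k nearest entries in a knowledge store of entity embeddings."""
    # L2 distance from the query embedding to every stored embedding
    dists = np.linalg.norm(store_embs - query_emb, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k nearest neighbors

    # Softmax over negative distances -> neighbor weights
    w = np.exp(-dists[nn] / temperature)
    w /= w.sum()

    # Aggregate neighbor weights onto their entity ids
    knn_probs = np.zeros(num_entities)
    for weight, idx in zip(w, nn):
        knn_probs[store_entity_ids[idx]] += weight

    # Linear interpolation of the two distributions
    return lam * knn_probs + (1 - lam) * model_probs
```

Because both input distributions sum to one, the interpolated output is still a valid distribution; entities that appear only in the knowledge store (e.g. rare or emerging ones) can receive probability mass even if the parametric model assigns them almost none.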
