Title
Semantic Loss Application to Entity Relation Recognition
Authors
Abstract
Entity relation recognition systems typically use either a pipelined model, which treats entity tagging and relation identification as separate tasks, or a joint model, which identifies entities and relations simultaneously. This paper compares these two general approaches to entity relation recognition. State-of-the-art entity relation recognition systems are built with deep recurrent neural networks, which often do not capture the symbolic knowledge or logical constraints present in the problem. The main contribution of this paper is an end-to-end neural model for joint entity relation extraction that incorporates a novel loss function. This loss function encodes the constraint information in the problem to guide model training effectively. We show that adding this loss function to existing, typical loss functions has a positive impact on model performance. The model is truly end-to-end, requires no feature engineering, and is easily extensible. Extensive experiments were conducted to evaluate the significance of capturing symbolic knowledge for natural language understanding. Models trained with this loss function are observed to outperform their counterparts and to converge faster. The experimental results of this work suggest that this methodology can be applied to other language understanding applications.
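The abstract does not give the exact form of the constraint-encoding loss. As an illustrative sketch only, a semantic-style loss can be computed as the negative log of the total probability mass the model assigns to constraint-satisfying label assignments. The function below assumes a hypothetical "exactly one label is true" constraint over independent per-label probabilities; the function name and constraint are assumptions for illustration, not the paper's actual formulation.

```python
import math

def semantic_loss_exactly_one(probs):
    """Illustrative semantic-style loss for an 'exactly one label is true'
    constraint over independent Bernoulli label probabilities.

    Sums the probability of every assignment that satisfies the constraint
    (label i is on, all others off), then returns the negative log of that
    satisfying mass. Lower loss = predictions more consistent with the
    constraint.
    """
    satisfying_mass = 0.0
    for i in range(len(probs)):
        term = probs[i]  # probability that label i is on
        for j, p in enumerate(probs):
            if j != i:
                term *= (1.0 - p)  # all other labels off
        satisfying_mass += term
    return -math.log(satisfying_mass)

# A prediction fully consistent with the constraint incurs ~zero loss:
print(semantic_loss_exactly_one([1.0, 0.0, 0.0]))  # ~0.0
# An ambiguous prediction is penalized:
print(semantic_loss_exactly_one([0.5, 0.5]))       # > 0
```

In training, such a term would typically be added to the usual supervised loss as a weighted regularizer, e.g. `total_loss = cross_entropy + w * semantic_loss`, so the constraint guides optimization without replacing the labeled signal.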