Paper Title

Multifaceted Context Representation using Dual Attention for Ontology Alignment

Paper Authors

Vivek Iyer, Arvind Agarwal, Harshit Kumar

Paper Abstract

Ontology Alignment is an important research problem that finds application in various fields such as data integration, data transfer, and data preparation. State-of-the-art (SOTA) architectures for Ontology Alignment typically rely on naive, domain-dependent approaches with handcrafted rules and manually assigned values, making them unscalable and inefficient. Deep Learning approaches to ontology alignment use domain-specific architectures that are not only inextensible to other datasets and domains, but also typically perform worse than rule-based approaches due to various limitations, including model overfitting and dataset sparsity. In this work, we propose VeeAlign, a Deep Learning based model that uses a dual-attention mechanism to compute the contextualized representation of a concept in order to learn alignments. By doing so, our approach not only exploits both the syntactic and semantic structure of ontologies, but is also, by design, flexible and scalable to different domains with minimal effort. We validate our approach on various datasets from different domains and in multilingual settings, and show its superior performance over SOTA methods.
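The abstract does not spell out the architecture, but the core idea of a dual attention over two context views of a concept can be sketched. Below is a minimal, illustrative PyTorch sketch: the class name `DualAttentionEncoder`, the two context sets (e.g., two kinds of structural neighbours), the shapes, and the fusion step are all assumptions made for illustration, not VeeAlign's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionEncoder(nn.Module):
    """Toy dual-attention encoder (hypothetical): a concept embedding attends
    separately over two context sets, one per structural view of an ontology,
    and the two attended summaries are fused with the concept itself."""

    def __init__(self, dim: int):
        super().__init__()
        # One attention scorer per context view.
        self.score_a = nn.Linear(2 * dim, 1)
        self.score_b = nn.Linear(2 * dim, 1)
        self.fuse = nn.Linear(3 * dim, dim)

    def attend(self, concept, context, scorer):
        # concept: (dim,), context: (n, dim) -> weighted context summary (dim,)
        pairs = torch.cat([concept.expand_as(context), context], dim=-1)
        weights = F.softmax(scorer(pairs).squeeze(-1), dim=0)
        return weights @ context

    def forward(self, concept, context_a, context_b):
        summary_a = self.attend(concept, context_a, self.score_a)
        summary_b = self.attend(concept, context_b, self.score_b)
        return torch.tanh(self.fuse(torch.cat([concept, summary_a, summary_b])))

# Usage sketch: contextualized representations of concepts from two ontologies
# could be compared (e.g., via cosine similarity) to score candidate alignments.
dim = 64
enc = DualAttentionEncoder(dim)
concept = torch.randn(dim)
ctx_a = torch.randn(5, dim)   # e.g., embeddings of one kind of neighbour
ctx_b = torch.randn(3, dim)   # e.g., embeddings of another kind of neighbour
rep = enc(concept, ctx_a, ctx_b)
print(rep.shape)  # torch.Size([64])
```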
