Paper Title

Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs

Paper Authors

Martin Schmitt, Leonardo F. R. Ribeiro, Philipp Dufter, Iryna Gurevych, Hinrich Schütze

Paper Abstract

We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph - not only direct neighbors - facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
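To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of graph self-attention with a learned scalar bias per attention head indexed by shortest-path length. It is an illustration of the general idea only, not the paper's actual code: the names (`shortest_path_lengths`, `GraphSelfAttention`, `max_dist`) and the exact way the distance bias enters the attention logits are assumptions.

```python
# Sketch (not the paper's implementation): attention over all node pairs,
# biased by a learned per-head weight for each shortest-path distance.
import torch
import torch.nn as nn
import torch.nn.functional as F


def shortest_path_lengths(adj: torch.Tensor, max_dist: int) -> torch.Tensor:
    """All-pairs shortest-path lengths via BFS-style frontier expansion.

    adj: (n, n) boolean adjacency matrix. Returns an (n, n) long tensor;
    distances of max_dist or more (including unreachable pairs) are
    clamped to max_dist.
    """
    n = adj.size(0)
    dist = torch.full((n, n), max_dist, dtype=torch.long)
    dist.fill_diagonal_(0)
    reach = torch.eye(n, dtype=torch.bool)
    frontier = reach.clone()
    for d in range(1, max_dist):
        # Nodes at distance d are unreached neighbors of the distance-(d-1) frontier.
        frontier = (frontier.float() @ adj.float() > 0) & ~reach
        if not frontier.any():
            break
        dist[frontier] = d
        reach |= frontier
    return dist


class GraphSelfAttention(nn.Module):
    """Self-attention where every node attends to every other node,
    with a learned scalar bias per (head, shortest-path length) pair,
    so each head can learn a differently connected view of the graph."""

    def __init__(self, d_model: int, n_heads: int, max_dist: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learned weight per head and per possible path length (0..max_dist).
        self.dist_bias = nn.Embedding(max_dist + 1, n_heads)

    def forward(self, x: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(n, self.n_heads, self.d_head).transpose(0, 1)
        k = k.view(n, self.n_heads, self.d_head).transpose(0, 1)
        v = v.view(n, self.n_heads, self.d_head).transpose(0, 1)
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        # Add the distance-dependent bias: (n, n, heads) -> (heads, n, n).
        logits = logits + self.dist_bias(dist).permute(2, 0, 1)
        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(0, 1).reshape(n, -1)
        return self.out(out)


if __name__ == "__main__":
    # Toy 4-node path graph: 0 - 1 - 2 - 3.
    adj = torch.zeros(4, 4, dtype=torch.bool)
    adj[0, 1] = adj[1, 2] = adj[2, 3] = True
    adj = adj | adj.T  # undirected
    dist = shortest_path_lengths(adj, max_dist=5)
    layer = GraphSelfAttention(d_model=64, n_heads=4, max_dist=5)
    print(layer(torch.randn(4, 64), dist).shape)  # torch.Size([4, 64])
```

Because the bias is a scalar per (head, distance) pair, different heads can up-weight or down-weight different path lengths independently, which is one plausible reading of the abstract's claim that the model learns "differently connected views of the input graph".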
