Title

Multilingual Chart-based Constituency Parse Extraction from Pre-trained Language Models

Authors

Taeuk Kim, Bowen Li, Sang-goo Lee

Abstract

As it has been unveiled that pre-trained language models (PLMs) are to some extent capable of recognizing syntactic concepts in natural language, much effort has been made to develop a method for extracting complete (binary) parses from PLMs without training separate parsers. We improve upon this paradigm by proposing a novel chart-based method and an effective top-K ensemble technique. Moreover, we demonstrate that we can broaden the scope of application of the approach into multilingual settings. Specifically, we show that by applying our method on multilingual PLMs, it becomes possible to induce non-trivial parses for sentences from nine languages in an integrated and language-agnostic manner, attaining performance superior or comparable to that of unsupervised PCFGs. We also verify that our approach is robust to cross-lingual transfer. Finally, we provide analyses on the inner workings of our method. For instance, we discover universal attention heads which are consistently sensitive to syntactic information irrespective of the input language.
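The abstract's "chart-based method" refers to decoding a complete binary parse from per-span scores. As a minimal sketch of that idea (not the paper's actual implementation), the following CKY-style dynamic program finds the highest-scoring binary bracketing given a dictionary of span scores; `span_scores` is a hypothetical stand-in for syntax-sensitive scores one might derive from a PLM's attention heads or hidden states:

```python
def cky_best_tree(span_scores, n):
    """Return the max-scoring binary tree over words 0..n-1, where
    span_scores[(i, j)] is a score for the span covering words i..j-1.
    Leaves are word indices; internal nodes are (left, right) pairs."""
    best = {}   # (i, j) -> best total score of any tree over the span
    split = {}  # (i, j) -> split point achieving that score
    # Base case: single-word spans.
    for i in range(n):
        best[(i, i + 1)] = span_scores.get((i, i + 1), 0.0)
    # Fill the chart bottom-up over increasing span lengths.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            k_best, s_best = None, float("-inf")
            for k in range(i + 1, j):
                s = best[(i, k)] + best[(k, j)]
                if s > s_best:
                    k_best, s_best = k, s
            best[(i, j)] = s_best + span_scores.get((i, j), 0.0)
            split[(i, j)] = k_best

    def backtrack(i, j):
        if j - i == 1:
            return i
        k = split[(i, j)]
        return (backtrack(i, k), backtrack(k, j))

    return backtrack(0, n)

# Toy example (hypothetical scores): rewarding span (0, 2) makes the
# decoder group the first two words together.
print(cky_best_tree({(0, 2): 5.0}, 3))  # → ((0, 1), 2)
```

The paper's top-K ensemble could then be layered on top by averaging span scores taken from the K best-performing attention heads before running this decoder.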
