Title
A Neural Model for Regular Grammar Induction
Authors
Abstract
Grammatical inference is a classical problem in computational learning theory and a topic of wider influence in natural language processing. We treat grammars as a model of computation and propose a novel neural approach to induction of regular grammars from positive and negative examples. Our model is fully explainable, its intermediate results are directly interpretable as partial parses, and it can be used to learn arbitrary regular grammars when provided with sufficient data. We find that our method consistently attains high recall and precision scores across a range of tests of varying complexity.
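The task setup described above can be illustrated with a small, hypothetical sketch (not the paper's model): a target regular language supplies positive and negative example strings, a candidate grammar is induced from them, and the candidate is scored by precision and recall. Here the target language `(ab)*` and the use of Python regular expressions as stand-ins for regular grammars are illustrative assumptions.

```python
import re

# Hypothetical illustration of the induction task setup (not the paper's model):
# a ground-truth regular language labels strings as positive/negative, and a
# candidate grammar is evaluated by precision and recall on those labels.

TARGET = re.compile(r"^(ab)*$")  # assumed ground-truth regular language (ab)*

examples = ["", "ab", "abab", "a", "ba", "abb", "ababab", "aab"]
labeled = [(s, bool(TARGET.match(s))) for s in examples]

# Suppose an induction procedure returned this candidate grammar:
candidate = re.compile(r"^(ab)*$")

tp = sum(1 for s, y in labeled if y and candidate.match(s))        # true positives
fp = sum(1 for s, y in labeled if not y and candidate.match(s))    # false positives
fn = sum(1 for s, y in labeled if y and not candidate.match(s))    # false negatives

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(precision, recall)  # a candidate matching the target exactly scores 1.0 1.0
```

A weaker candidate (e.g. `^(a|b)*$`) would accept the negative strings as well, keeping recall at 1.0 while lowering precision, which is why the paper reports both metrics.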